Class: Aws::CloudWatchLogs::Client

Inherits:
Seahorse::Client::Base show all
Includes:
Aws::ClientStubs
Defined in:
gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb

Overview

An API client for CloudWatchLogs. To construct a client, you need to configure a :region and :credentials.

client = Aws::CloudWatchLogs::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring region and credentials see the developer guide.

See #initialize for a full list of supported configuration options.

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

API Operations collapse

Instance Method Summary collapse

Methods included from Aws::ClientStubs

#api_requests, #stub_data, #stub_responses

Methods inherited from Seahorse::Client::Base

add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • Aws::Credentials - Used for configuring static, non-refreshing credentials.

    • Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.

    • Aws::AssumeRoleCredentials - Used when you need to assume a role.

    • Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.

    • Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.

    • Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.

    • Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • Aws::ECSCredentials - Used for loading credentials from instances running in ECS.

    • Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.

    When :credentials are not configured directly, the following locations will be searched for credentials:

    • Aws.config[:credentials]
    • The :access_key_id, :secret_access_key, :session_token, and :account_id options.
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'], and ENV['AWS_ACCOUNT_ID']
    • ~/.aws/credentials
    • ~/.aws/config
    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.
  • :region (required, String)

    The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:

    • Aws.config[:region]
    • ENV['AWS_REGION']
    • ENV['AMAZON_REGION']
    • ENV['AWS_DEFAULT_REGION']
    • ~/.aws/credentials
    • ~/.aws/config
  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to true, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults to false.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError instead of sleeping and retrying.

  • :client_side_monitoring (Boolean) — default: false

    When true, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to disable the SDK from automatically adding a host prefix to the default service endpoint when available.

  • :disable_request_compression (Boolean) — default: false

    When set to 'true' the request body will not be compressed for supported operations.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the :endpoint option directly. This is normally constructed from the :region option. Configuring :endpoint is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    'http://example.com'
    'https://example.com'
    'http://example.com:123'
    
  • :endpoint_cache_max_entries (Integer) — default: 1000

    The maximum size of the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    The maximum number of threads used for polling endpoints to be cached. Defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the time interval in seconds between requests that fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to true, endpoint discovery will be enabled for operations when available.

  • :event_stream_handler (Proc)

    When an EventStream or Proc object is provided, it will be used as callback for each chunk of event stream response received along the way.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :input_event_stream_handler (Proc)

    When an EventStream or Proc object is provided, it can be used for sending events for the event stream.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the :logger at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.

  • :output_event_stream_handler (Proc)

    When an EventStream or Proc object is provided, it will be used as callback for each chunk of event stream response received along the way.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used.

  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.

  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the legacy retry mode.

    @see https://www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the legacy retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • legacy - The pre-existing retry behavior. This is default value if no retry mode is provided.

    • standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default :sigv4a_signing_region_set is searched for in the following locations:

    • Aws.config[:sigv4a_signing_region_set]
    • ENV['AWS_SIGV4A_SIGNING_REGION_SET']
    • ~/.aws/config
  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disables response data type conversions. The request parameters hash must be formatted exactly as the API expects. This option is useful when you want to ensure the highest level of performance by avoiding overhead of walking request parameters and response data structures.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling Aws::ClientStubs#stub_responses. See Aws::ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled.

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses NoOpTelemetryProvider which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the opentelemetry-sdk gem and then, pass in an instance of a Aws::Telemetry::OTelProvider for telemetry provider.
  • :token_provider (Aws::TokenProvider)

    A Bearer Token Provider. This can be an instance of any one of the following classes:

    • Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.

    • Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws login.

    When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to true, FIPS-compatible endpoints will be used if available. When a FIPS region is used, the region is normalized and this config is set to true.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::CloudWatchLogs::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to #resolve_endpoint(parameters) where parameters is a Struct similar to Aws::CloudWatchLogs::EndpointParameters.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has "Expect" header set to "100-continue". Defaults to nil which disables this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The number of seconds to wait when opening an HTTP session before raising a Timeout::Error.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'http://proxy.com:123'.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When true, HTTP debug output will be sent to the :logger.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a content-length).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the X509::Store to verify peer certificate.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating http connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating http connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds

  • :ssl_verify_peer (Boolean) — default: true

    When true, SSL peer certificates are verified when establishing a connection.



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 462

def initialize(*args)
  super
end
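For example, a minimal sketch of constructing a client with explicit static credentials and the standard retry mode. The region, key values, and retry settings shown here are illustrative placeholders; most applications let the SDK resolve credentials from the default provider chain described above.

require 'aws-sdk-cloudwatchlogs'

client = Aws::CloudWatchLogs::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('EXAMPLE_ACCESS_KEY_ID', 'EXAMPLE_SECRET_ACCESS_KEY'), # placeholder values
  retry_mode: 'standard',
  max_attempts: 5,
)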

Instance Method Details

#associate_kms_key(params = {}) ⇒ Struct

Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account.

When you use AssociateKmsKey, you specify either the logGroupName parameter or the resourceIdentifier parameter. You can't specify both of those parameters in the same operation.

  • Specify the logGroupName parameter to cause all log events stored in the log group to be encrypted with that key. Only the log events ingested after the key is associated are encrypted with that key.

    Associating a KMS key with a log group overrides any existing associations between the log group and a KMS key. After a KMS key is associated with a log group, all newly ingested data for the log group is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.

    Associating a key with a log group does not cause the results of queries of that log group to be encrypted with that key. To have query results encrypted with a KMS key, you must use an AssociateKmsKey operation with the resourceIdentifier parameter that specifies a query-result resource.

  • Specify the resourceIdentifier parameter with a query-result resource, to use that key to encrypt the stored results of all future StartQuery operations in the account. The response from a GetQueryResults operation will still return the query results in plain text.

    Even if you have not associated a key with your query results, the query results are encrypted when stored, using the default CloudWatch Logs method.

    If you run a query from a monitoring account that queries logs in a source account, the query results key from the monitoring account, if any, is used.

If you delete the key that is used to encrypt log events or log group query results, then all the associated stored log events or query results that were encrypted with that key will be unencryptable and unusable.

CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group or query results. For more information, see Using Symmetric and Asymmetric Keys.

It can take up to 5 minutes for this operation to take effect.

If you attempt to associate a KMS key with a log group but the KMS key does not exist or the KMS key is disabled, you receive an InvalidParameterException error.

Examples:

Request syntax with placeholder values


resp = client.associate_kms_key({
  log_group_name: "LogGroupName",
  kms_key_id: "KmsKeyId", # required
  resource_identifier: "ResourceIdentifier",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (String)

    The name of the log group.

    In your AssociateKmsKey operation, you must specify either the resourceIdentifier parameter or the logGroup parameter, but you can't specify both.

  • :kms_key_id (required, String)

    The Amazon Resource Name (ARN) of the KMS key to use when encrypting log data. This must be a symmetric KMS key. For more information, see Amazon Resource Names and Using Symmetric and Asymmetric Keys.

  • :resource_identifier (String)

    Specifies the target for this operation. You must specify one of the following:

    • Specify the following ARN to have future GetQueryResults operations in this account encrypt the results with the specified KMS key. Replace REGION and ACCOUNT_ID with your Region and account ID.

      arn:aws:logs:REGION:ACCOUNT_ID:query-result:*

    • Specify the ARN of a log group to have CloudWatch Logs use the KMS key to encrypt log events that are ingested and stored by that log group. The log group ARN must be in the following format. Replace REGION and ACCOUNT_ID with your Region and account ID.

      arn:aws:logs:REGION:ACCOUNT_ID:log-group:LOG_GROUP_NAME

    In your AssociateKmsKey operation, you must specify either the resourceIdentifier parameter or the logGroup parameter, but you can't specify both.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 589

def associate_kms_key(params = {}, options = {})
  req = build_request(:associate_kms_key, params)
  req.send_request(options)
end
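As a hedged sketch of the resourceIdentifier mode described above, the following associates a KMS key with the account's query results. The key ARN is a placeholder, and REGION and ACCOUNT_ID must be replaced with your own values.

client.associate_kms_key({
  kms_key_id: "arn:aws:kms:REGION:ACCOUNT_ID:key/EXAMPLE_KEY_ID", # placeholder key ARN
  resource_identifier: "arn:aws:logs:REGION:ACCOUNT_ID:query-result:*",
})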

#cancel_export_task(params = {}) ⇒ Struct

Cancels the specified export task.

The task must be in the PENDING or RUNNING state.

Examples:

Request syntax with placeholder values


resp = client.cancel_export_task({
  task_id: "ExportTaskId", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :task_id (required, String)

    The ID of the export task.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 613

def cancel_export_task(params = {}, options = {})
  req = build_request(:cancel_export_task, params)
  req.send_request(options)
end

#create_delivery(params = {}) ⇒ Types::CreateDeliveryResponse

Creates a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination that you have already created.

Only some Amazon Web Services services support being configured as a delivery source using this operation. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

A delivery destination can represent a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

  • Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource.

  • Create a delivery destination, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.

  • If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.

  • Use CreateDelivery to create a delivery by pairing exactly one delivery source and one delivery destination.

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

To update an existing delivery configuration, use UpdateDeliveryConfiguration.

Examples:

Request syntax with placeholder values


resp = client.create_delivery({
  delivery_source_name: "DeliverySourceName", # required
  delivery_destination_arn: "Arn", # required
  record_fields: ["FieldHeader"],
  field_delimiter: "FieldDelimiter",
  s3_delivery_configuration: {
    suffix_path: "DeliverySuffixPath",
    enable_hive_compatible_path: false,
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.delivery.id #=> String
resp.delivery.arn #=> String
resp.delivery.delivery_source_name #=> String
resp.delivery.delivery_destination_arn #=> String
resp.delivery.delivery_destination_type #=> String, one of "S3", "CWL", "FH"
resp.delivery.record_fields #=> Array
resp.delivery.record_fields[0] #=> String
resp.delivery.field_delimiter #=> String
resp.delivery.s3_delivery_configuration.suffix_path #=> String
resp.delivery.s3_delivery_configuration.enable_hive_compatible_path #=> Boolean
resp.delivery.tags #=> Hash
resp.delivery.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :delivery_source_name (required, String)

    The name of the delivery source to use for this delivery.

  • :delivery_destination_arn (required, String)

    The ARN of the delivery destination to use for this delivery.

  • :record_fields (Array<String>)

    The list of record fields to be delivered to the destination, in order. If the delivery's log source has mandatory fields, they must be included in this list.

  • :field_delimiter (String)

    The field delimiter to use between record fields when the final output format of a delivery is in Plain, W3C, or Raw format.

  • :s3_delivery_configuration (Types::S3DeliveryConfiguration)

    This structure contains parameters that are valid only when the delivery's delivery destination is an S3 bucket.

  • :tags (Hash<String,String>)

    An optional list of key-value pairs to associate with the resource.

    For more information about tagging, see Tagging Amazon Web Services resources

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 733

def create_delivery(params = {}, options = {})
  req = build_request(:create_delivery, params)
  req.send_request(options)
end
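The following is a hedged end-to-end sketch of the workflow described above: create a delivery source, create a delivery destination, then pair them with CreateDelivery. The names, the source resource ARN, the log type, and the S3 bucket ARN are placeholders, and the exact fields accepted by put_delivery_source and put_delivery_destination depend on the service being configured.

# 1. Logical source: the resource that actually emits the logs (placeholder ARN and log type).
client.put_delivery_source({
  name: "my-delivery-source",
  resource_arn: "SOURCE_RESOURCE_ARN",
  log_type: "LOG_TYPE",
})

# 2. Logical destination: here an S3 bucket (placeholder bucket ARN).
dest = client.put_delivery_destination({
  name: "my-delivery-destination",
  delivery_destination_configuration: {
    destination_resource_arn: "arn:aws:s3:::amzn-s3-demo-bucket",
  },
})

# 3. Pair exactly one source with one destination.
client.create_delivery({
  delivery_source_name: "my-delivery-source",
  delivery_destination_arn: dest.delivery_destination.arn,
})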

#create_export_task(params = {}) ⇒ Types::CreateExportTaskResponse

Creates an export task so that you can efficiently export data from a log group to an Amazon S3 bucket. When you perform a CreateExportTask operation, you must use credentials that have permission to write to the S3 bucket that you specify as the destination.

Exporting log data to S3 buckets that are encrypted by KMS is supported. Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a retention period is also supported.

Exporting to S3 buckets that are encrypted with AES-256 is supported.

This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.

You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate log data for each export task, specify a prefix to be used as the Amazon S3 key prefix for all exported objects.

Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can sort the exported log field data by using Linux utilities.

Examples:

Request syntax with placeholder values


resp = client.create_export_task({
  task_name: "ExportTaskName",
  log_group_name: "LogGroupName", # required
  log_stream_name_prefix: "LogStreamName",
  from: 1, # required
  to: 1, # required
  destination: "ExportDestinationBucket", # required
  destination_prefix: "ExportDestinationPrefix",
})

Response structure


resp.task_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :task_name (String)

    The name of the export task.

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name_prefix (String)

    Export only log streams that match the provided prefix. If you don't specify a value, no prefix filter is applied.

  • :from (required, Integer)

    The start time of the range for the request, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp earlier than this time are not exported.

  • :to (required, Integer)

    The end time of the range for the request, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp later than this time are not exported.

    You must specify a time that is not earlier than when this log group was created.

  • :destination (required, String)

    The name of S3 bucket for the exported log data. The bucket must be in the same Amazon Web Services Region.

  • :destination_prefix (String)

    The prefix used as the start of the key for every object exported. If you don't specify a value, the default is exportedlogs.

    The length of this parameter must comply with the S3 object key name length limits. The object key name is a sequence of Unicode characters with UTF-8 encoding, and can be up to 1,024 bytes.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 832

def create_export_task(params = {}, options = {})
  req = build_request(:create_export_task, params)
  req.send_request(options)
end
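Because from and to are expressed as milliseconds since the Unix epoch, a common pattern is to derive them from Time values. A hedged sketch covering the last 24 hours follows; the log group and bucket names are placeholders.

# Current time in epoch milliseconds.
now_ms = (Time.now.to_f * 1000).to_i

resp = client.create_export_task({
  log_group_name: "my-log-group",          # placeholder
  from: now_ms - 24 * 60 * 60 * 1000,      # 24 hours ago
  to: now_ms,
  destination: "amzn-s3-demo-bucket",      # placeholder bucket name
  destination_prefix: "exportedlogs",
})
resp.task_id #=> ID to poll with describe_export_tasks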

#create_log_anomaly_detector(params = {}) ⇒ Types::CreateLogAnomalyDetectorResponse

Creates an anomaly detector that regularly scans one or more log groups and looks for patterns and anomalies in the logs.

An anomaly detector can help surface issues by automatically discovering anomalies in your log event traffic. An anomaly detector uses machine learning algorithms to scan log events and find patterns. A pattern is a shared text structure that recurs among your log fields. Patterns provide a useful tool for analyzing large sets of logs because a large number of log events can often be compressed into a few patterns.

The anomaly detector uses pattern recognition to find anomalies, which are unusual log events. It uses the evaluationFrequency to compare current log events and patterns with trained baselines.

Fields within a pattern are called tokens. Fields that vary within a pattern, such as a request ID or timestamp, are referred to as dynamic tokens and represented by <*>.

The following is an example of a pattern:

[INFO] Request time: <*> ms

This pattern represents log events like [INFO] Request time: 327 ms and other similar log events that differ only by the number, in this case 327. When the pattern is displayed, the different numbers are replaced by <*>

Any parts of log events that are masked as sensitive data are not scanned for anomalies. For more information about masking sensitive data, see Help protect sensitive log data with masking.

Examples:

Request syntax with placeholder values


resp = client.create_log_anomaly_detector({
  log_group_arn_list: ["LogGroupArn"], # required
  detector_name: "DetectorName",
  evaluation_frequency: "ONE_MIN", # accepts ONE_MIN, FIVE_MIN, TEN_MIN, FIFTEEN_MIN, THIRTY_MIN, ONE_HOUR
  filter_pattern: "FilterPattern",
  kms_key_id: "KmsKeyId",
  anomaly_visibility_time: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.anomaly_detector_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_arn_list (required, Array<String>)

    An array containing the ARN of the log group that this anomaly detector will watch. You can specify only one log group ARN.

  • :detector_name (String)

    A name for this anomaly detector.

  • :evaluation_frequency (String)

    Specifies how often the anomaly detector is to run and look for anomalies. Set this value according to the frequency that the log group receives new logs. For example, if the log group receives new log events every 10 minutes, then 15 minutes might be a good setting for evaluationFrequency.

  • :filter_pattern (String)

    You can use this parameter to limit the anomaly detection model to examine only log events that match the pattern you specify here. For more information, see Filter and Pattern Syntax.

  • :kms_key_id (String)

    Optionally assigns a KMS key to secure this anomaly detector and its findings. If a key is assigned, the anomalies found and the model used by this detector are encrypted at rest with the key. If a key is assigned to an anomaly detector, a user must have permissions for both this key and for the anomaly detector to retrieve information about the anomalies that it finds.

    For more information about using a KMS key and to see the required IAM policy, see Use a KMS key with an anomaly detector.

  • :anomaly_visibility_time (Integer)

    The number of days to have visibility on an anomaly. After this time period has elapsed for an anomaly, it will be automatically baselined and the anomaly detector will treat new occurrences of a similar anomaly as normal. Therefore, if you do not correct the cause of an anomaly during the time period specified in anomalyVisibilityTime, it will be considered normal going forward and will not be detected as an anomaly.

  • :tags (Hash<String,String>)

    An optional list of key-value pairs to associate with the resource.

    For more information about tagging, see Tagging Amazon Web Services resources

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 958

def create_log_anomaly_detector(params = {}, options = {})
  req = build_request(:create_log_anomaly_detector, params)
  req.send_request(options)
end

#create_log_group(params = {}) ⇒ Struct

Creates a log group with the specified name. You can create up to 1,000,000 log groups per Region per account.

You must use the following guidelines when naming a log group:

  • Log group names must be unique within a Region for an Amazon Web Services account.

  • Log group names can be between 1 and 512 characters long.

  • Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and '#' (number sign)

  • Log group names can't start with the string aws/

When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy.

If you associate a KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.

If you attempt to associate a KMS key with the log group but the KMS key does not exist or the KMS key is disabled, you receive an InvalidParameterException error.

CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see Using Symmetric and Asymmetric Keys.

Examples:

Request syntax with placeholder values


resp = client.create_log_group({
  log_group_name: "LogGroupName", # required
  kms_key_id: "KmsKeyId",
  tags: {
    "TagKey" => "TagValue",
  },
  log_group_class: "STANDARD", # accepts STANDARD, INFREQUENT_ACCESS
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    A name for the log group.

  • :kms_key_id (String)

    The Amazon Resource Name (ARN) of the KMS key to use when encrypting log data. For more information, see Amazon Resource Names.

  • :tags (Hash<String,String>)

    The key-value pairs to use for the tags.

    You can grant users access to certain log groups while preventing them from accessing other log groups. To do so, tag your groups and use IAM policies that refer to those tags. To assign tags when you create a log group, you must have either the logs:TagResource or logs:TagLogGroup permission. For more information about tagging, see Tagging Amazon Web Services resources. For more information about using tags to control access, see Controlling access to Amazon Web Services resources using tags.

  • :log_group_class (String)

    Use this parameter to specify the log group class for this log group. There are two classes:

    • The Standard log class supports all CloudWatch Logs features.

    • The Infrequent Access log class supports a subset of CloudWatch Logs features and incurs lower costs.

    If you omit this parameter, the default of STANDARD is used.

    The value of logGroupClass can't be changed after a log group is created.

    For details about the features supported by each class, see Log classes

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1067

def create_log_group(params = {}, options = {})
  req = build_request(:create_log_group, params)
  req.send_request(options)
end
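As noted above, log events in a new log group do not expire by default; a retention policy is set separately with PutRetentionPolicy. A hedged sketch (the group name, tag, and 30-day value are placeholders):

client.create_log_group({
  log_group_name: "/my-app/production",  # placeholder name
  tags: { "Team" => "platform" },        # placeholder tag
})

# Expire and delete events after 30 days.
client.put_retention_policy({
  log_group_name: "/my-app/production",
  retention_in_days: 30,
})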

#create_log_stream(params = {}) ⇒ Struct

Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored.

There is no limit on the number of log streams that you can create for a log group. There is a limit of 50 TPS on CreateLogStream operations, after which transactions are throttled.

You must use the following guidelines when naming a log stream:

  • Log stream names must be unique within the log group.

  • Log stream names can be between 1 and 512 characters long.

  • Don't use ':' (colon) or '*' (asterisk) characters.

Examples:

Request syntax with placeholder values


resp = client.create_log_stream({
  log_group_name: "LogGroupName", # required
  log_stream_name: "LogStreamName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name (required, String)

    The name of the log stream.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1107

def create_log_stream(params = {}, options = {})
  req = build_request(:create_log_stream, params)
  req.send_request(options)
end
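A hedged sketch of creating a log stream and writing a single event to it with PutLogEvents, which expects timestamps in milliseconds since the Unix epoch. The group and stream names and the message are placeholders.

client.create_log_stream({
  log_group_name: "/my-app/production",  # placeholder
  log_stream_name: "instance-1",         # placeholder
})

client.put_log_events({
  log_group_name: "/my-app/production",
  log_stream_name: "instance-1",
  log_events: [
    { timestamp: (Time.now.to_f * 1000).to_i, message: "application started" },
  ],
})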

#delete_account_policy(params = {}) ⇒ Struct

Deletes a CloudWatch Logs account policy. This stops the account-wide policy from applying to log groups in the account. If you delete a data protection policy or subscription filter policy, any log-group level policies of those types remain in effect.

To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are deleting.

  • To delete a data protection policy, you must have the logs:DeleteDataProtectionPolicy and logs:DeleteAccountPolicy permissions.

  • To delete a subscription filter policy, you must have the logs:DeleteSubscriptionFilter and logs:DeleteAccountPolicy permissions.

  • To delete a transformer policy, you must have the logs:DeleteTransformer and logs:DeleteAccountPolicy permissions.

  • To delete a field index policy, you must have the logs:DeleteIndexPolicy and logs:DeleteAccountPolicy permissions.

If you delete a field index policy, the indexing of the log events that happened before you deleted the policy will still be used for up to 30 days to improve CloudWatch Logs Insights queries.

Examples:

Request syntax with placeholder values


resp = client.delete_account_policy({
  policy_name: "PolicyName", # required
  policy_type: "DATA_PROTECTION_POLICY", # required, accepts DATA_PROTECTION_POLICY, SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY, TRANSFORMER_POLICY
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :policy_name (required, String)

    The name of the policy to delete.

  • :policy_type (required, String)

    The type of policy to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1157

def delete_account_policy(params = {}, options = {})
  req = build_request(:delete_account_policy, params)
  req.send_request(options)
end

#delete_data_protection_policy(params = {}) ⇒ Struct

Deletes the data protection policy from the specified log group.

For more information about data protection policies, see PutDataProtectionPolicy.

Examples:

Request syntax with placeholder values


resp = client.delete_data_protection_policy({
  log_group_identifier: "LogGroupIdentifier", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifier (required, String)

    The name or ARN of the log group that you want to delete the data protection policy for.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1187

def delete_data_protection_policy(params = {}, options = {})
  req = build_request(:delete_data_protection_policy, params)
  req.send_request(options)
end

#delete_delivery(params = {}) ⇒ Struct

Deletes a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination. Deleting a delivery only deletes the connection between the delivery source and delivery destination. It does not delete the delivery destination or the delivery source.

Examples:

Request syntax with placeholder values


resp = client.delete_delivery({
  id: "DeliveryId", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :id (required, String)

    The unique ID of the delivery to delete. You can find the ID of a delivery with the DescribeDeliveries operation.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1218

def delete_delivery(params = {}, options = {})
  req = build_request(:delete_delivery, params)
  req.send_request(options)
end
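A hedged sketch of locating a delivery ID with DescribeDeliveries and then deleting it. The delivery source name used for matching is a placeholder.

deliveries = client.describe_deliveries({}).deliveries
target = deliveries.find { |d| d.delivery_source_name == "my-delivery-source" } # placeholder name

client.delete_delivery({ id: target.id }) if target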

#delete_delivery_destination(params = {}) ⇒ Struct

Deletes a delivery destination. A delivery is a connection between a logical delivery source and a logical delivery destination.

You can't delete a delivery destination if any current deliveries are associated with it. To find whether any deliveries are associated with this delivery destination, use the DescribeDeliveries operation and check the deliveryDestinationArn field in the results.

Examples:

Request syntax with placeholder values


resp = client.delete_delivery_destination({
  name: "DeliveryDestinationName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the delivery destination that you want to delete. You can find a list of delivery destination names by using the DescribeDeliveryDestinations operation.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1256

def delete_delivery_destination(params = {}, options = {})
  req = build_request(:delete_delivery_destination, params)
  req.send_request(options)
end

#delete_delivery_destination_policy(params = {}) ⇒ Struct

Deletes a delivery destination policy. For more information about these policies, see PutDeliveryDestinationPolicy.

Examples:

Request syntax with placeholder values


resp = client.delete_delivery_destination_policy({
  delivery_destination_name: "DeliveryDestinationName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :delivery_destination_name (required, String)

    The name of the delivery destination that you want to delete the policy for.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1284

def delete_delivery_destination_policy(params = {}, options = {})
  req = build_request(:delete_delivery_destination_policy, params)
  req.send_request(options)
end

#delete_delivery_source(params = {}) ⇒ Struct

Deletes a delivery source. A delivery is a connection between a logical delivery source and a logical delivery destination.

You can't delete a delivery source if any current deliveries are associated with it. To find whether any deliveries are associated with this delivery source, use the DescribeDeliveries operation and check the deliverySourceName field in the results.

Examples:

Request syntax with placeholder values


resp = client.delete_delivery_source({
  name: "DeliverySourceName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the delivery source that you want to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1316

def delete_delivery_source(params = {}, options = {})
  req = build_request(:delete_delivery_source, params)
  req.send_request(options)
end

#delete_destination(params = {}) ⇒ Struct

Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination.

Examples:

Request syntax with placeholder values


resp = client.delete_destination({
  destination_name: "DestinationName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :destination_name (required, String)

    The name of the destination.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1340

def delete_destination(params = {}, options = {})
  req = build_request(:delete_destination, params)
  req.send_request(options)
end

#delete_index_policy(params = {}) ⇒ Struct

Deletes a log-group level field index policy that was applied to a single log group. The indexing of the log events that happened before you delete the policy will still be used for as many as 30 days to improve CloudWatch Logs Insights queries.

You can't use this operation to delete an account-level index policy. Instead, use DeleteAccountPolicy.

If you delete a log-group level field index policy and there is an account-level field index policy, in a few minutes the log group begins using that account-wide policy to index new incoming log events.

Examples:

Request syntax with placeholder values


resp = client.delete_index_policy({
  log_group_identifier: "LogGroupIdentifier", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifier (required, String)

    The log group to delete the index policy for. You can specify either the name or the ARN of the log group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1378

def delete_index_policy(params = {}, options = {})
  req = build_request(:delete_index_policy, params)
  req.send_request(options)
end

#delete_integration(params = {}) ⇒ Struct

Deletes the integration between CloudWatch Logs and OpenSearch Service. If your integration has active vended logs dashboards, you must specify true for the force parameter, otherwise the operation will fail. If you delete the integration by setting force to true, all your vended logs dashboards powered by OpenSearch Service will be deleted and the data that was on them will no longer be accessible.

Examples:

Request syntax with placeholder values


resp = client.delete_integration({
  integration_name: "IntegrationName", # required
  force: false,
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_name (required, String)

    The name of the integration to delete. To find the name of your integration, use ListIntegrations.

  • :force (Boolean)

    Specify true to force the deletion of the integration even if vended logs dashboards currently exist.

    The default is false.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1417

def delete_integration(params = {}, options = {})
  req = build_request(:delete_integration, params)
  req.send_request(options)
end

#delete_log_anomaly_detector(params = {}) ⇒ Struct

Deletes the specified CloudWatch Logs anomaly detector.

Examples:

Request syntax with placeholder values


resp = client.delete_log_anomaly_detector({
  anomaly_detector_arn: "AnomalyDetectorArn", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :anomaly_detector_arn (required, String)

    The ARN of the anomaly detector to delete. You can find the ARNs of log anomaly detectors in your account by using the ListLogAnomalyDetectors operation.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1445

def delete_log_anomaly_detector(params = {}, options = {})
  req = build_request(:delete_log_anomaly_detector, params)
  req.send_request(options)
end

#delete_log_group(params = {}) ⇒ Struct

Deletes the specified log group and permanently deletes all the archived log events associated with the log group.

Examples:

Request syntax with placeholder values


resp = client.delete_log_group({
  log_group_name: "LogGroupName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1468

def delete_log_group(params = {}, options = {})
  req = build_request(:delete_log_group, params)
  req.send_request(options)
end

#delete_log_stream(params = {}) ⇒ Struct

Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.

Examples:

Request syntax with placeholder values


resp = client.delete_log_stream({
  log_group_name: "LogGroupName", # required
  log_stream_name: "LogStreamName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name (required, String)

    The name of the log stream.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1495

def delete_log_stream(params = {}, options = {})
  req = build_request(:delete_log_stream, params)
  req.send_request(options)
end

#delete_metric_filter(params = {}) ⇒ Struct

Deletes the specified metric filter.

Examples:

Request syntax with placeholder values


resp = client.delete_metric_filter({
  log_group_name: "LogGroupName", # required
  filter_name: "FilterName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name (required, String)

    The name of the metric filter.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1521

def delete_metric_filter(params = {}, options = {})
  req = build_request(:delete_metric_filter, params)
  req.send_request(options)
end

#delete_query_definition(params = {}) ⇒ Types::DeleteQueryDefinitionResponse

Deletes a saved CloudWatch Logs Insights query definition. A query definition contains details about a saved CloudWatch Logs Insights query.

Each DeleteQueryDefinition operation can delete one query definition.

You must have the logs:DeleteQueryDefinition permission to be able to perform this operation.

Examples:

Request syntax with placeholder values


resp = client.delete_query_definition({
  query_definition_id: "QueryId", # required
})

Response structure


resp.success #=> Boolean

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :query_definition_id (required, String)

    The ID of the query definition that you want to delete. You can use DescribeQueryDefinitions to retrieve the IDs of your saved query definitions.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1563

def delete_query_definition(params = {}, options = {})
  req = build_request(:delete_query_definition, params)
  req.send_request(options)
end

#delete_resource_policy(params = {}) ⇒ Struct

Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account.

Examples:

Request syntax with placeholder values


resp = client.delete_resource_policy({
  policy_name: "PolicyName",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :policy_name (String)

    The name of the policy to be revoked. This parameter is required.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1586

def delete_resource_policy(params = {}, options = {})
  req = build_request(:delete_resource_policy, params)
  req.send_request(options)
end

#delete_retention_policy(params = {}) ⇒ Struct

Deletes the specified retention policy.

Log events do not expire if they belong to log groups without a retention policy.

Examples:

Request syntax with placeholder values


resp = client.delete_retention_policy({
  log_group_name: "LogGroupName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1611

def delete_retention_policy(params = {}, options = {})
  req = build_request(:delete_retention_policy, params)
  req.send_request(options)
end

#delete_subscription_filter(params = {}) ⇒ Struct

Deletes the specified subscription filter.

Examples:

Request syntax with placeholder values


resp = client.delete_subscription_filter({
  log_group_name: "LogGroupName", # required
  filter_name: "FilterName", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name (required, String)

    The name of the subscription filter.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1637

def delete_subscription_filter(params = {}, options = {})
  req = build_request(:delete_subscription_filter, params)
  req.send_request(options)
end

#delete_transformer(params = {}) ⇒ Struct

Deletes the log transformer for the specified log group. As soon as you do this, the transformation of incoming log events according to that transformer stops. If this account has an account-level transformer that applies to this log group, the log group begins using that account-level transformer when this log-group level transformer is deleted.

After you delete a transformer, be sure to edit any metric filters or subscription filters that relied on the transformed versions of the log events.

Examples:

Request syntax with placeholder values


resp = client.delete_transformer({
  log_group_identifier: "LogGroupIdentifier", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifier (required, String)

    Specify either the name or ARN of the log group to delete the transformer for. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1670

def delete_transformer(params = {}, options = {})
  req = build_request(:delete_transformer, params)
  req.send_request(options)
end

#describe_account_policies(params = {}) ⇒ Types::DescribeAccountPoliciesResponse

Returns a list of all CloudWatch Logs account policies in the account.

Examples:

Request syntax with placeholder values


resp = client.describe_account_policies({
  policy_type: "DATA_PROTECTION_POLICY", # required, accepts DATA_PROTECTION_POLICY, SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY, TRANSFORMER_POLICY
  policy_name: "PolicyName",
  account_identifiers: ["AccountId"],
  next_token: "NextToken",
})

Response structure


resp.account_policies #=> Array
resp.account_policies[0].policy_name #=> String
resp.account_policies[0].policy_document #=> String
resp.account_policies[0].last_updated_time #=> Integer
resp.account_policies[0].policy_type #=> String, one of "DATA_PROTECTION_POLICY", "SUBSCRIPTION_FILTER_POLICY", "FIELD_INDEX_POLICY", "TRANSFORMER_POLICY"
resp.account_policies[0].scope #=> String, one of "ALL"
resp.account_policies[0].selection_criteria #=> String
resp.account_policies[0].account_id #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :policy_type (required, String)

    Use this parameter to limit the returned policies to only the policies that match the policy type that you specify.

  • :policy_name (String)

    Use this parameter to limit the returned policies to only the policy with the name that you specify.

  • :account_identifiers (Array<String>)

    If you are using an account that is set up as a monitoring account for CloudWatch unified cross-account observability, you can use this to specify the account ID of a source account. If you do, the operation returns the account policy for the specified account. Currently, you can specify only one account ID in this parameter.

    If you omit this parameter, only the policy in the current account is returned.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1729

def describe_account_policies(params = {}, options = {})
  req = build_request(:describe_account_policies, params)
  req.send_request(options)
end
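
For example, a minimal sketch of listing every account-level data protection policy by following next_token manually (assumes a configured client; the response attribute names follow the structure shown above):


policies = []
params = { policy_type: "DATA_PROTECTION_POLICY" }
loop do
  resp = client.describe_account_policies(params)
  policies.concat(resp.account_policies)
  break unless resp.next_token          # stop when no further pages remain
  params[:next_token] = resp.next_token
end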

#describe_configuration_templates(params = {}) ⇒ Types::DescribeConfigurationTemplatesResponse

Use this operation to return the valid and default values that are used when creating delivery sources, delivery destinations, and deliveries. For more information about deliveries, see CreateDelivery.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_configuration_templates({
  service: "Service",
  log_types: ["LogType"],
  resource_types: ["ResourceType"],
  delivery_destination_types: ["S3"], # accepts S3, CWL, FH
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.configuration_templates #=> Array
resp.configuration_templates[0].service #=> String
resp.configuration_templates[0].log_type #=> String
resp.configuration_templates[0].resource_type #=> String
resp.configuration_templates[0].delivery_destination_type #=> String, one of "S3", "CWL", "FH"
resp.configuration_templates[0].default_delivery_config_values.record_fields #=> Array
resp.configuration_templates[0].default_delivery_config_values.record_fields[0] #=> String
resp.configuration_templates[0].default_delivery_config_values.field_delimiter #=> String
resp.configuration_templates[0].default_delivery_config_values.s3_delivery_configuration.suffix_path #=> String
resp.configuration_templates[0].default_delivery_config_values.s3_delivery_configuration.enable_hive_compatible_path #=> Boolean
resp.configuration_templates[0].allowed_fields #=> Array
resp.configuration_templates[0].allowed_fields[0].name #=> String
resp.configuration_templates[0].allowed_fields[0].mandatory #=> Boolean
resp.configuration_templates[0].allowed_output_formats #=> Array
resp.configuration_templates[0].allowed_output_formats[0] #=> String, one of "json", "plain", "w3c", "raw", "parquet"
resp.configuration_templates[0].allowed_action_for_allow_vended_logs_delivery_for_resource #=> String
resp.configuration_templates[0].allowed_field_delimiters #=> Array
resp.configuration_templates[0].allowed_field_delimiters[0] #=> String
resp.configuration_templates[0].allowed_suffix_path_fields #=> Array
resp.configuration_templates[0].allowed_suffix_path_fields[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :service (String)

    Use this parameter to filter the response to include only the configuration templates that apply to the Amazon Web Services service that you specify here.

  • :log_types (Array<String>)

    Use this parameter to filter the response to include only the configuration templates that apply to the log types that you specify here.

  • :resource_types (Array<String>)

    Use this parameter to filter the response to include only the configuration templates that apply to the resource types that you specify here.

  • :delivery_destination_types (Array<String>)

    Use this parameter to filter the response to include only the configuration templates that apply to the delivery destination types that you specify here.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

  • :limit (Integer)

    Use this parameter to limit the number of configuration templates that are returned in the response.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1817

def describe_configuration_templates(params = {}, options = {})
  req = build_request(:describe_configuration_templates, params)
  req.send_request(options)
end
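
As a sketch, the pageable response can be enumerated with each_page to walk all configuration templates for a given destination type (the filter value here is just an example):


resp = client.describe_configuration_templates({
  delivery_destination_types: ["S3"],
})
resp.each_page do |page|
  page.configuration_templates.each do |template|
    puts "#{template.service} / #{template.log_type}"
  end
end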

#describe_deliveries(params = {}) ⇒ Types::DescribeDeliveriesResponse

Retrieves a list of the deliveries that have been created in the account.

A delivery is a connection between a delivery source and a delivery destination.

A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_deliveries({
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.deliveries #=> Array
resp.deliveries[0].id #=> String
resp.deliveries[0].arn #=> String
resp.deliveries[0].delivery_source_name #=> String
resp.deliveries[0].delivery_destination_arn #=> String
resp.deliveries[0].delivery_destination_type #=> String, one of "S3", "CWL", "FH"
resp.deliveries[0].record_fields #=> Array
resp.deliveries[0].record_fields[0] #=> String
resp.deliveries[0].field_delimiter #=> String
resp.deliveries[0].s3_delivery_configuration.suffix_path #=> String
resp.deliveries[0].s3_delivery_configuration.enable_hive_compatible_path #=> Boolean
resp.deliveries[0].tags #=> Hash
resp.deliveries[0].tags["TagKey"] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

  • :limit (Integer)

    Optionally specify the maximum number of deliveries to return in the response.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1883

def describe_deliveries(params = {}, options = {})
  req = build_request(:describe_deliveries, params)
  req.send_request(options)
end
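
A minimal sketch of enumerating every delivery in the account by paging through the Enumerable response:


client.describe_deliveries.each_page do |page|
  page.deliveries.each do |delivery|
    puts "#{delivery.id} -> #{delivery.delivery_destination_arn}"
  end
end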

#describe_delivery_destinations(params = {}) ⇒ Types::DescribeDeliveryDestinationsResponse

Retrieves a list of the delivery destinations that have been created in the account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_delivery_destinations({
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.delivery_destinations #=> Array
resp.delivery_destinations[0].name #=> String
resp.delivery_destinations[0].arn #=> String
resp.delivery_destinations[0].delivery_destination_type #=> String, one of "S3", "CWL", "FH"
resp.delivery_destinations[0].output_format #=> String, one of "json", "plain", "w3c", "raw", "parquet"
resp.delivery_destinations[0].delivery_destination_configuration.destination_resource_arn #=> String
resp.delivery_destinations[0].tags #=> Hash
resp.delivery_destinations[0].tags["TagKey"] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

  • :limit (Integer)

    Optionally specify the maximum number of delivery destinations to return in the response.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1929

def describe_delivery_destinations(params = {}, options = {})
  req = build_request(:describe_delivery_destinations, params)
  req.send_request(options)
end

#describe_delivery_sources(params = {}) ⇒ Types::DescribeDeliverySourcesResponse

Retrieves a list of the delivery sources that have been created in the account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_delivery_sources({
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.delivery_sources #=> Array
resp.delivery_sources[0].name #=> String
resp.delivery_sources[0].arn #=> String
resp.delivery_sources[0].resource_arns #=> Array
resp.delivery_sources[0].resource_arns[0] #=> String
resp.delivery_sources[0].service #=> String
resp.delivery_sources[0].log_type #=> String
resp.delivery_sources[0].tags #=> Hash
resp.delivery_sources[0].tags["TagKey"] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

  • :limit (Integer)

    Optionally specify the maximum number of delivery sources to return in the response.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1976

def describe_delivery_sources(params = {}, options = {})
  req = build_request(:describe_delivery_sources, params)
  req.send_request(options)
end

#describe_destinations(params = {}) ⇒ Types::DescribeDestinationsResponse

Lists all your destinations. The results are ASCII-sorted by destination name.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_destinations({
  destination_name_prefix: "DestinationName",
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.destinations #=> Array
resp.destinations[0].destination_name #=> String
resp.destinations[0].target_arn #=> String
resp.destinations[0].role_arn #=> String
resp.destinations[0].access_policy #=> String
resp.destinations[0].arn #=> String
resp.destinations[0].creation_time #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :destination_name_prefix (String)

    The prefix to match. If you don't specify a value, no prefix filter is applied.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default maximum value of 50 items is used.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2026

def describe_destinations(params = {}, options = {})
  req = build_request(:describe_destinations, params)
  req.send_request(options)
end

#describe_export_tasks(params = {}) ⇒ Types::DescribeExportTasksResponse

Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status.

Examples:

Request syntax with placeholder values


resp = client.describe_export_tasks({
  task_id: "ExportTaskId",
  status_code: "CANCELLED", # accepts CANCELLED, COMPLETED, FAILED, PENDING, PENDING_CANCEL, RUNNING
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.export_tasks #=> Array
resp.export_tasks[0].task_id #=> String
resp.export_tasks[0].task_name #=> String
resp.export_tasks[0].log_group_name #=> String
resp.export_tasks[0].from #=> Integer
resp.export_tasks[0].to #=> Integer
resp.export_tasks[0].destination #=> String
resp.export_tasks[0].destination_prefix #=> String
resp.export_tasks[0].status.code #=> String, one of "CANCELLED", "COMPLETED", "FAILED", "PENDING", "PENDING_CANCEL", "RUNNING"
resp.export_tasks[0].status.message #=> String
resp.export_tasks[0].execution_info.creation_time #=> Integer
resp.export_tasks[0].execution_info.completion_time #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :task_id (String)

    The ID of the export task. Specifying a task ID filters the results to one or zero export tasks.

  • :status_code (String)

    The status code of the export task. Specifying a status code filters the results to zero or more export tasks.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2084

def describe_export_tasks(params = {}, options = {})
  req = build_request(:describe_export_tasks, params)
  req.send_request(options)
end
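
For example, a sketch that lists only the export tasks that are currently running, using the status values documented above:


resp = client.describe_export_tasks({ status_code: "RUNNING" })
resp.export_tasks.each do |task|
  puts "#{task.task_id}: #{task.status.code} #{task.status.message}"
end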

#describe_field_indexes(params = {}) ⇒ Types::DescribeFieldIndexesResponse

Returns a list of field indexes listed in the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy.

Examples:

Request syntax with placeholder values


resp = client.describe_field_indexes({
  log_group_identifiers: ["LogGroupIdentifier"], # required
  next_token: "NextToken",
})

Response structure


resp.field_indexes #=> Array
resp.field_indexes[0].log_group_identifier #=> String
resp.field_indexes[0].field_index_name #=> String
resp.field_indexes[0].last_scan_time #=> Integer
resp.field_indexes[0].first_event_time #=> Integer
resp.field_indexes[0].last_event_time #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifiers (required, Array<String>)

    An array containing the names or ARNs of the log groups that you want to retrieve field indexes for.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2131

def describe_field_indexes(params = {}, options = {})
  req = build_request(:describe_field_indexes, params)
  req.send_request(options)
end

#describe_index_policies(params = {}) ⇒ Types::DescribeIndexPoliciesResponse

Returns the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy.

If a specified log group has a log-group level index policy, that policy is returned by this operation.

If a specified log group doesn't have a log-group level index policy, but an account-wide index policy applies to it, that account-wide policy is returned by this operation.

To find information about only account-level policies, use DescribeAccountPolicies instead.

Examples:

Request syntax with placeholder values


resp = client.describe_index_policies({
  log_group_identifiers: ["LogGroupIdentifier"], # required
  next_token: "NextToken",
})

Response structure


resp.index_policies #=> Array
resp.index_policies[0].log_group_identifier #=> String
resp.index_policies[0].last_update_time #=> Integer
resp.index_policies[0].policy_document #=> String
resp.index_policies[0].policy_name #=> String
resp.index_policies[0].source #=> String, one of "ACCOUNT", "LOG_GROUP"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifiers (required, Array<String>)

    An array containing the name or ARN of the log group that you want to retrieve field index policies for.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2188

def describe_index_policies(params = {}, options = {})
  req = build_request(:describe_index_policies, params)
  req.send_request(options)
end
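
As a sketch, the source field in the response shows whether each returned policy comes from the log group itself or from an account-level policy (the log group name below is a placeholder):


resp = client.describe_index_policies({
  log_group_identifiers: ["my-log-group"], # placeholder log group name
})
resp.index_policies.each do |policy|
  puts "#{policy.log_group_identifier}: #{policy.source}" # "ACCOUNT" or "LOG_GROUP"
end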

#describe_log_groups(params = {}) ⇒ Types::DescribeLogGroupsResponse

Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.

CloudWatch Logs doesn't support IAM policies that control access to the DescribeLogGroups action by using the aws:ResourceTag/key-name condition key. Other CloudWatch Logs actions do support the use of the aws:ResourceTag/key-name condition key to control access. For more information about using tags to control access, see Controlling access to Amazon Web Services resources using tags.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_log_groups({
  account_identifiers: ["AccountId"],
  log_group_name_prefix: "LogGroupName",
  log_group_name_pattern: "LogGroupNamePattern",
  next_token: "NextToken",
  limit: 1,
  include_linked_accounts: false,
  log_group_class: "STANDARD", # accepts STANDARD, INFREQUENT_ACCESS
})

Response structure


resp.log_groups #=> Array
resp.log_groups[0].log_group_name #=> String
resp.log_groups[0].creation_time #=> Integer
resp.log_groups[0].retention_in_days #=> Integer
resp.log_groups[0].metric_filter_count #=> Integer
resp.log_groups[0].arn #=> String
resp.log_groups[0].stored_bytes #=> Integer
resp.log_groups[0].kms_key_id #=> String
resp.log_groups[0].data_protection_status #=> String, one of "ACTIVATED", "DELETED", "ARCHIVED", "DISABLED"
resp.log_groups[0].inherited_properties #=> Array
resp.log_groups[0].inherited_properties[0] #=> String, one of "ACCOUNT_DATA_PROTECTION"
resp.log_groups[0].log_group_class #=> String, one of "STANDARD", "INFREQUENT_ACCESS"
resp.log_groups[0].log_group_arn #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :account_identifiers (Array<String>)

    When includeLinkedAccounts is set to True, use this parameter to specify the list of accounts to search. You can specify as many as 20 account IDs in the array.

  • :log_group_name_prefix (String)

    The prefix to match.

    logGroupNamePrefix and logGroupNamePattern are mutually exclusive. Only one of these parameters can be passed.

  • :log_group_name_pattern (String)

    If you specify a string for this parameter, the operation returns only log groups that have names that match the string based on a case-sensitive substring search. For example, if you specify Foo, log groups named FooBar, aws/Foo, and GroupFoo would match, but foo, F/o/o and Froo would not match.

    If you specify logGroupNamePattern in your request, then only arn, creationTime, and logGroupName are included in the response.

    logGroupNamePattern and logGroupNamePrefix are mutually exclusive. Only one of these parameters can be passed.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

  • :include_linked_accounts (Boolean)

    If you are using a monitoring account, set this to True to have the operation return log groups in the accounts listed in accountIdentifiers.

    If this parameter is set to true and accountIdentifiers contains a null value, the operation returns all log groups in the monitoring account and all log groups in all source accounts that are linked to the monitoring account.

  • :log_group_class (String)

    Specifies the log group class for this log group. There are two classes:

    • The Standard log class supports all CloudWatch Logs features.

    • The Infrequent Access log class supports a subset of CloudWatch Logs features and incurs lower costs.

For details about the features supported by each class, see Log classes.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2316

def describe_log_groups(params = {}, options = {})
  req = build_request(:describe_log_groups, params)
  req.send_request(options)
end
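
A minimal sketch of enumerating all matching log groups with the pageable response; only the prefix is passed because logGroupNamePrefix and logGroupNamePattern are mutually exclusive (the prefix value is a placeholder):


resp = client.describe_log_groups({ log_group_name_prefix: "/aws/lambda/" })
resp.each_page do |page|
  page.log_groups.each { |group| puts group.log_group_name }
end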

#describe_log_streams(params = {}) ⇒ Types::DescribeLogStreamsResponse

Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered.

You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both.

This operation has a limit of five transactions per second, after which transactions are throttled.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_log_streams({
  log_group_name: "LogGroupName",
  log_group_identifier: "LogGroupIdentifier",
  log_stream_name_prefix: "LogStreamName",
  order_by: "LogStreamName", # accepts LogStreamName, LastEventTime
  descending: false,
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.log_streams #=> Array
resp.log_streams[0].log_stream_name #=> String
resp.log_streams[0].creation_time #=> Integer
resp.log_streams[0].first_event_timestamp #=> Integer
resp.log_streams[0].last_event_timestamp #=> Integer
resp.log_streams[0].last_ingestion_time #=> Integer
resp.log_streams[0].upload_sequence_token #=> String
resp.log_streams[0].arn #=> String
resp.log_streams[0].stored_bytes #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (String)

    The name of the log group.

    You must include either logGroupIdentifier or logGroupName, but not both.

  • :log_group_identifier (String)

    Specify either the name or ARN of the log group to view. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN.

    You must include either logGroupIdentifier or logGroupName, but not both.

  • :log_stream_name_prefix (String)

    The prefix to match.

    If orderBy is LastEventTime, you cannot specify this parameter.

  • :order_by (String)

    If the value is LogStreamName, the results are ordered by log stream name. If the value is LastEventTime, the results are ordered by the event time. The default value is LogStreamName.

    If you order the results by event time, you cannot specify the logStreamNamePrefix parameter.

    lastEventTimestamp represents the time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. lastEventTimestamp updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but in rare situations might take longer.

  • :descending (Boolean)

If the value is true, results are returned in descending order. If the value is false, results are returned in ascending order. The default value is false.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2428

def describe_log_streams(params = {}, options = {})
  req = build_request(:describe_log_streams, params)
  req.send_request(options)
end
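
For example, a sketch that fetches the most recently active log stream by ordering on LastEventTime; logStreamNamePrefix is omitted because it cannot be combined with that ordering (the log group name is a placeholder):


resp = client.describe_log_streams({
  log_group_name: "my-log-group", # placeholder log group name
  order_by: "LastEventTime",
  descending: true,
  limit: 1,
})
latest = resp.log_streams.first
puts latest.log_stream_name if latest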

#describe_metric_filters(params = {}) ⇒ Types::DescribeMetricFiltersResponse

Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_metric_filters({
  log_group_name: "LogGroupName",
  filter_name_prefix: "FilterName",
  next_token: "NextToken",
  limit: 1,
  metric_name: "MetricName",
  metric_namespace: "MetricNamespace",
})

Response structure


resp.metric_filters #=> Array
resp.metric_filters[0].filter_name #=> String
resp.metric_filters[0].filter_pattern #=> String
resp.metric_filters[0].metric_transformations #=> Array
resp.metric_filters[0].metric_transformations[0].metric_name #=> String
resp.metric_filters[0].metric_transformations[0].metric_namespace #=> String
resp.metric_filters[0].metric_transformations[0].metric_value #=> String
resp.metric_filters[0].metric_transformations[0].default_value #=> Float
resp.metric_filters[0].metric_transformations[0].dimensions #=> Hash
resp.metric_filters[0].metric_transformations[0].dimensions["DimensionsKey"] #=> String
resp.metric_filters[0].metric_transformations[0].unit #=> String, one of "Seconds", "Microseconds", "Milliseconds", "Bytes", "Kilobytes", "Megabytes", "Gigabytes", "Terabytes", "Bits", "Kilobits", "Megabits", "Gigabits", "Terabits", "Percent", "Count", "Bytes/Second", "Kilobytes/Second", "Megabytes/Second", "Gigabytes/Second", "Terabytes/Second", "Bits/Second", "Kilobits/Second", "Megabits/Second", "Gigabits/Second", "Terabits/Second", "Count/Second", "None"
resp.metric_filters[0].creation_time #=> Integer
resp.metric_filters[0].log_group_name #=> String
resp.metric_filters[0].apply_on_transformed_logs #=> Boolean
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (String)

    The name of the log group.

  • :filter_name_prefix (String)

    The prefix to match. CloudWatch Logs uses the value that you set here only if you also include the logGroupName parameter in your request.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

  • :metric_name (String)

    Filters results to include only those with the specified metric name. If you include this parameter in your request, you must also include the metricNamespace parameter.

  • :metric_namespace (String)

    Filters results to include only those in the specified namespace. If you include this parameter in your request, you must also include the metricName parameter.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2502

def describe_metric_filters(params = {}, options = {})
  req = build_request(:describe_metric_filters, params)
  req.send_request(options)
end
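
As a sketch, filtering by metric requires both metricName and metricNamespace, as noted above (both values here are placeholders):


resp = client.describe_metric_filters({
  metric_name: "ErrorCount",         # placeholder metric name
  metric_namespace: "MyApplication", # placeholder namespace
})
resp.metric_filters.each do |filter|
  puts "#{filter.filter_name}: #{filter.filter_pattern}"
end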

#describe_queries(params = {}) ⇒ Types::DescribeQueriesResponse

Returns a list of CloudWatch Logs Insights queries that are scheduled, running, or have been run recently in this account. You can request all queries or limit it to queries of a specific log group or queries with a certain status.

Examples:

Request syntax with placeholder values


resp = client.describe_queries({
  log_group_name: "LogGroupName",
  status: "Scheduled", # accepts Scheduled, Running, Complete, Failed, Cancelled, Timeout, Unknown
  max_results: 1,
  next_token: "NextToken",
  query_language: "CWLI", # accepts CWLI, SQL, PPL
})

Response structure


resp.queries #=> Array
resp.queries[0].query_language #=> String, one of "CWLI", "SQL", "PPL"
resp.queries[0].query_id #=> String
resp.queries[0].query_string #=> String
resp.queries[0].status #=> String, one of "Scheduled", "Running", "Complete", "Failed", "Cancelled", "Timeout", "Unknown"
resp.queries[0].create_time #=> Integer
resp.queries[0].log_group_name #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (String)

    Limits the returned queries to only those for the specified log group.

  • :status (String)

    Limits the returned queries to only those that have the specified status. Valid values are Cancelled, Complete, Failed, Running, and Scheduled.

  • :max_results (Integer)

    Limits the number of returned queries to the specified number.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

  • :query_language (String)

    Limits the returned queries to only the queries that use the specified query language.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2561

def describe_queries(params = {}, options = {})
  req = build_request(:describe_queries, params)
  req.send_request(options)
end
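
For example, a minimal sketch that lists only the CloudWatch Logs Insights queries currently running in the account:


resp = client.describe_queries({ status: "Running" })
resp.queries.each do |query|
  puts "#{query.query_id} (#{query.query_language}): #{query.query_string}"
end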

#describe_query_definitions(params = {}) ⇒ Types::DescribeQueryDefinitionsResponse

This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions. You can retrieve query definitions from the current account or from a source account that is linked to the current account.

You can use the queryDefinitionNamePrefix parameter to limit the results to only the query definitions that have names that start with a certain string.

Examples:

Request syntax with placeholder values


resp = client.describe_query_definitions({
  query_language: "CWLI", # accepts CWLI, SQL, PPL
  query_definition_name_prefix: "QueryDefinitionName",
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.query_definitions #=> Array
resp.query_definitions[0].query_language #=> String, one of "CWLI", "SQL", "PPL"
resp.query_definitions[0].query_definition_id #=> String
resp.query_definitions[0].name #=> String
resp.query_definitions[0].query_string #=> String
resp.query_definitions[0].last_modified #=> Integer
resp.query_definitions[0].log_group_names #=> Array
resp.query_definitions[0].log_group_names[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :query_language (String)

    The query language used for this query. For more information about the query languages that CloudWatch Logs supports, see Supported query languages.

  • :query_definition_name_prefix (String)

    Use this parameter to filter your results to only the query definitions that have names that start with the prefix you specify.

  • :max_results (Integer)

    Limits the number of returned query definitions to the specified number.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2626

def describe_query_definitions(params = {}, options = {})
  req = build_request(:describe_query_definitions, params)
  req.send_request(options)
end

#describe_resource_policies(params = {}) ⇒ Types::DescribeResourcePoliciesResponse

Lists the resource policies in this account.

Examples:

Request syntax with placeholder values


resp = client.describe_resource_policies({
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.resource_policies #=> Array
resp.resource_policies[0].policy_name #=> String
resp.resource_policies[0].policy_document #=> String
resp.resource_policies[0].last_updated_time #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

  • :limit (Integer)

    The maximum number of resource policies to be displayed with one call of this API.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2665

def describe_resource_policies(params = {}, options = {})
  req = build_request(:describe_resource_policies, params)
  req.send_request(options)
end

#describe_subscription_filters(params = {}) ⇒ Types::DescribeSubscriptionFiltersResponse

Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_subscription_filters({
  log_group_name: "LogGroupName", # required
  filter_name_prefix: "FilterName",
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.subscription_filters #=> Array
resp.subscription_filters[0].filter_name #=> String
resp.subscription_filters[0].log_group_name #=> String
resp.subscription_filters[0].filter_pattern #=> String
resp.subscription_filters[0].destination_arn #=> String
resp.subscription_filters[0].role_arn #=> String
resp.subscription_filters[0].distribution #=> String, one of "Random", "ByLogStream"
resp.subscription_filters[0].apply_on_transformed_logs #=> Boolean
resp.subscription_filters[0].creation_time #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name_prefix (String)

    The prefix to match. If you don't specify a value, no prefix filter is applied.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2722

def describe_subscription_filters(params = {}, options = {})
  req = build_request(:describe_subscription_filters, params)
  req.send_request(options)
end

#disassociate_kms_key(params = {}) ⇒ Struct

Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account.

When you use DisassociateKmsKey, you specify either the logGroupName parameter or the resourceIdentifier parameter. You can't specify both of those parameters in the same operation.

  • Specify the logGroupName parameter to stop using the KMS key to encrypt future log events ingested and stored in the log group. Instead, they will be encrypted with the default CloudWatch Logs method. The log events that were ingested while the key was associated with the log group are still encrypted with that key. Therefore, CloudWatch Logs will need permissions for the key whenever that data is accessed.

  • Specify the resourceIdentifier parameter with the query-result resource to stop using the KMS key to encrypt the results of all future StartQuery operations in the account. They will instead be encrypted with the default CloudWatch Logs method. The results from queries that ran while the key was associated with the account are still encrypted with that key. Therefore, CloudWatch Logs will need permissions for the key whenever that data is accessed.

It can take up to 5 minutes for this operation to take effect.

Examples:

Request syntax with placeholder values


resp = client.disassociate_kms_key({
  log_group_name: "LogGroupName",
  resource_identifier: "ResourceIdentifier",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (String)

    The name of the log group.

    In your DisassociateKmsKey operation, you must specify either the resourceIdentifier parameter or the logGroup parameter, but you can't specify both.

  • :resource_identifier (String)

    Specifies the target for this operation. You must specify one of the following:

    • Specify the ARN of a log group to stop having CloudWatch Logs use the KMS key to encrypt log events that are ingested and stored by that log group. After you run this operation, CloudWatch Logs encrypts ingested log events with the default CloudWatch Logs method. The log group ARN must be in the following format. Replace REGION and ACCOUNT_ID with your Region and account ID.

      arn:aws:logs:REGION:ACCOUNT_ID:log-group:LOG_GROUP_NAME

    • Specify the following ARN to stop using this key to encrypt the results of future StartQuery operations in this account. Replace REGION and ACCOUNT_ID with your Region and account ID.

      arn:aws:logs:REGION:ACCOUNT_ID:query-result:*

In your DisassociateKmsKey operation, you must specify either the resourceIdentifier parameter or the logGroup parameter, but you can't specify both.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2803

def disassociate_kms_key(params = {}, options = {})
  req = build_request(:disassociate_kms_key, params)
  req.send_request(options)
end
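
A sketch of the query-result form of this call, using the ARN format documented above (REGION and ACCOUNT_ID are placeholders to replace with your own values):


client.disassociate_kms_key({
  resource_identifier: "arn:aws:logs:REGION:ACCOUNT_ID:query-result:*",
})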

#filter_log_events(params = {}) ⇒ Types::FilterLogEventsResponse

Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.

You must have the logs:FilterLogEvents permission to perform this operation.

You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both.

By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events) or all the events found within the specified time range. If the results include a token, that means there are more log events available. You can get additional results by specifying the token in a subsequent call. This operation can return empty results while there are more log events available through the token.

The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the PutLogEvents request.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.filter_log_events({
  log_group_name: "LogGroupName",
  log_group_identifier: "LogGroupIdentifier",
  log_stream_names: ["LogStreamName"],
  log_stream_name_prefix: "LogStreamName",
  start_time: 1,
  end_time: 1,
  filter_pattern: "FilterPattern",
  next_token: "NextToken",
  limit: 1,
  interleaved: false,
  unmask: false,
})

Response structure


resp.events #=> Array
resp.events[0].log_stream_name #=> String
resp.events[0].timestamp #=> Integer
resp.events[0].message #=> String
resp.events[0].ingestion_time #=> Integer
resp.events[0].event_id #=> String
resp.searched_log_streams #=> Array
resp.searched_log_streams[0].log_stream_name #=> String
resp.searched_log_streams[0].searched_completely #=> Boolean
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (String)

    The name of the log group to search.

    You must include either logGroupIdentifier or logGroupName, but not both.

  • :log_group_identifier (String)

    Specify either the name or ARN of the log group to view log events from. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN.

    You must include either logGroupIdentifier or logGroupName, but not both.

  • :log_stream_names (Array<String>)

    Filters the results to only logs from the log streams in this list.

    If you specify a value for both logStreamNames and logStreamNamePrefix, the action returns an InvalidParameterException error.

  • :log_stream_name_prefix (String)

    Filters the results to include only events from log streams that have names starting with this prefix.

    If you specify a value for both logStreamNamePrefix and logStreamNames, the action returns an InvalidParameterException error.

  • :start_time (Integer)

    The start of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp before this time are not returned.

  • :end_time (Integer)

    The end of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp later than this time are not returned.

  • :filter_pattern (String)

    The filter pattern to use. For more information, see Filter and Pattern Syntax.

    If not provided, all the events are matched.

  • :next_token (String)

    The token for the next set of events to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of events to return. The default is 10,000 events.

  • :interleaved (Boolean)

    If the value is true, the operation attempts to provide responses that contain events from multiple log streams within the log group, interleaved in a single response. If the value is false, all the matched log events in the first log stream are searched first, then those in the next log stream, and so on.

    Important As of June 17, 2019, this parameter is ignored and the value is assumed to be true. The response from this operation always interleaves events from multiple log streams within a log group.

  • :unmask (Boolean)

    Specify true to display the log event fields with all sensitive data unmasked and visible. The default is false.

    To use this operation with this parameter, you must be signed into an account with the logs:Unmask permission.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2959

def filter_log_events(params = {}, options = {})
  req = build_request(:filter_log_events, params)
  req.send_request(options)
end
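
A minimal sketch of collecting every matching event by following next_token; as noted above, a page can be empty while a token is still returned, so the loop ends only when no token comes back (the log group name and filter pattern are placeholders):


params = {
  log_group_name: "my-log-group", # placeholder log group name
  filter_pattern: "ERROR",
  start_time: (Time.now.to_i - 3600) * 1000, # last hour, in milliseconds
}
events = []
loop do
  resp = client.filter_log_events(params)
  events.concat(resp.events)
  break unless resp.next_token
  params[:next_token] = resp.next_token
end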

#get_data_protection_policy(params = {}) ⇒ Types::GetDataProtectionPolicyResponse

Returns information about a log group data protection policy.

Examples:

Request syntax with placeholder values


resp = client.get_data_protection_policy({
  log_group_identifier: "LogGroupIdentifier", # required
})

Response structure


resp.log_group_identifier #=> String
resp.policy_document #=> String
resp.last_updated_time #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifier (required, String)

    The name or ARN of the log group that contains the data protection policy that you want to see.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2992

def get_data_protection_policy(params = {}, options = {})
  req = build_request(:get_data_protection_policy, params)
  req.send_request(options)
end

#get_delivery(params = {}) ⇒ Types::GetDeliveryResponse

Returns complete information about one logical delivery. A delivery is a connection between a delivery source and a delivery destination.

A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.

You need to specify the delivery id in this operation. You can find the IDs of the deliveries in your account with the DescribeDeliveries operation.

Examples:

Request syntax with placeholder values


resp = client.get_delivery({
  id: "DeliveryId", # required
})

Response structure


resp.delivery.id #=> String
resp.delivery.arn #=> String
resp.delivery.delivery_source_name #=> String
resp.delivery.delivery_destination_arn #=> String
resp.delivery.delivery_destination_type #=> String, one of "S3", "CWL", "FH"
resp.delivery.record_fields #=> Array
resp.delivery.record_fields[0] #=> String
resp.delivery.field_delimiter #=> String
resp.delivery.s3_delivery_configuration.suffix_path #=> String
resp.delivery.s3_delivery_configuration.enable_hive_compatible_path #=> Boolean
resp.delivery.tags #=> Hash
resp.delivery.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :id (required, String)

    The ID of the delivery that you want to retrieve.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3050

def get_delivery(params = {}, options = {})
  req = build_request(:get_delivery, params)
  req.send_request(options)
end

#get_delivery_destination(params = {}) ⇒ Types::GetDeliveryDestinationResponse

Retrieves complete information about one delivery destination.

Examples:

Request syntax with placeholder values


resp = client.get_delivery_destination({
  name: "DeliveryDestinationName", # required
})

Response structure


resp.delivery_destination.name #=> String
resp.delivery_destination.arn #=> String
resp.delivery_destination.delivery_destination_type #=> String, one of "S3", "CWL", "FH"
resp.delivery_destination.output_format #=> String, one of "json", "plain", "w3c", "raw", "parquet"
resp.delivery_destination.delivery_destination_configuration.destination_resource_arn #=> String
resp.delivery_destination.tags #=> Hash
resp.delivery_destination.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the delivery destination that you want to retrieve.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3084

def get_delivery_destination(params = {}, options = {})
  req = build_request(:get_delivery_destination, params)
  req.send_request(options)
end

#get_delivery_destination_policy(params = {}) ⇒ Types::GetDeliveryDestinationPolicyResponse

Retrieves the delivery destination policy assigned to the delivery destination that you specify. For more information about delivery destinations and their policies, see PutDeliveryDestinationPolicy.

Examples:

Request syntax with placeholder values


resp = client.get_delivery_destination_policy({
  delivery_destination_name: "DeliveryDestinationName", # required
})

Response structure


resp.policy.delivery_destination_policy #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :delivery_destination_name (required, String)

    The name of the delivery destination that you want to retrieve the policy of.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3120

def get_delivery_destination_policy(params = {}, options = {})
  req = build_request(:get_delivery_destination_policy, params)
  req.send_request(options)
end

#get_delivery_source(params = {}) ⇒ Types::GetDeliverySourceResponse

Retrieves complete information about one delivery source.

Examples:

Request syntax with placeholder values


resp = client.get_delivery_source({
  name: "DeliverySourceName", # required
})

Response structure


resp.delivery_source.name #=> String
resp.delivery_source.arn #=> String
resp.delivery_source.resource_arns #=> Array
resp.delivery_source.resource_arns[0] #=> String
resp.delivery_source.service #=> String
resp.delivery_source.log_type #=> String
resp.delivery_source.tags #=> Hash
resp.delivery_source.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the delivery source that you want to retrieve.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3155

def get_delivery_source(params = {}, options = {})
  req = build_request(:get_delivery_source, params)
  req.send_request(options)
end

#get_integration(params = {}) ⇒ Types::GetIntegrationResponse

Returns information about one integration between CloudWatch Logs and OpenSearch Service.

Examples:

Request syntax with placeholder values


resp = client.get_integration({
  integration_name: "IntegrationName", # required
})

Response structure


resp.integration_name #=> String
resp.integration_type #=> String, one of "OPENSEARCH"
resp.integration_status #=> String, one of "PROVISIONING", "ACTIVE", "FAILED"
resp.integration_details.open_search_integration_details.data_source.data_source_name #=> String
resp.integration_details.open_search_integration_details.data_source.status.status #=> String, one of "ACTIVE", "NOT_FOUND", "ERROR"
resp.integration_details.open_search_integration_details.data_source.status.status_message #=> String
resp.integration_details.open_search_integration_details.application.application_endpoint #=> String
resp.integration_details.open_search_integration_details.application.application_arn #=> String
resp.integration_details.open_search_integration_details.application.application_id #=> String
resp.integration_details.open_search_integration_details.application.status.status #=> String, one of "ACTIVE", "NOT_FOUND", "ERROR"
resp.integration_details.open_search_integration_details.application.status.status_message #=> String
resp.integration_details.open_search_integration_details.collection.collection_endpoint #=> String
resp.integration_details.open_search_integration_details.collection.collection_arn #=> String
resp.integration_details.open_search_integration_details.collection.status.status #=> String, one of "ACTIVE", "NOT_FOUND", "ERROR"
resp.integration_details.open_search_integration_details.collection.status.status_message #=> String
resp.integration_details.open_search_integration_details.workspace.workspace_id #=> String
resp.integration_details.open_search_integration_details.workspace.status.status #=> String, one of "ACTIVE", "NOT_FOUND", "ERROR"
resp.integration_details.open_search_integration_details.workspace.status.status_message #=> String
resp.integration_details.open_search_integration_details.encryption_policy.policy_name #=> String
resp.integration_details.open_search_integration_details.encryption_policy.status.status #=> String, one of "ACTIVE", "NOT_FOUND", "ERROR"
resp.integration_details.open_search_integration_details.encryption_policy.status.status_message #=> String
resp.integration_details.open_search_integration_details.network_policy.policy_name #=> String
resp.integration_details.open_search_integration_details.network_policy.status.status #=> String, one of "ACTIVE", "NOT_FOUND", "ERROR"
resp.integration_details.open_search_integration_details.network_policy.status.status_message #=> String
resp.integration_details.open_search_integration_details.access_policy.policy_name #=> String
resp.integration_details.open_search_integration_details.access_policy.status.status #=> String, one of "ACTIVE", "NOT_FOUND", "ERROR"
resp.integration_details.open_search_integration_details.access_policy.status.status_message #=> String
resp.integration_details.open_search_integration_details.lifecycle_policy.policy_name #=> String
resp.integration_details.open_search_integration_details.lifecycle_policy.status.status #=> String, one of "ACTIVE", "NOT_FOUND", "ERROR"
resp.integration_details.open_search_integration_details.lifecycle_policy.status.status_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_name (required, String)

The name of the integration that you want to find information about. To find the name of your integration, use ListIntegrations.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3221

def get_integration(params = {}, options = {})
  req = build_request(:get_integration, params)
  req.send_request(options)
end

#get_log_anomaly_detector(params = {}) ⇒ Types::GetLogAnomalyDetectorResponse

Retrieves information about the log anomaly detector that you specify.

Examples:

Request syntax with placeholder values


resp = client.get_log_anomaly_detector({
  anomaly_detector_arn: "AnomalyDetectorArn", # required
})

Response structure


resp.detector_name #=> String
resp.log_group_arn_list #=> Array
resp.log_group_arn_list[0] #=> String
resp.evaluation_frequency #=> String, one of "ONE_MIN", "FIVE_MIN", "TEN_MIN", "FIFTEEN_MIN", "THIRTY_MIN", "ONE_HOUR"
resp.filter_pattern #=> String
resp.anomaly_detector_status #=> String, one of "INITIALIZING", "TRAINING", "ANALYZING", "FAILED", "DELETED", "PAUSED"
resp.kms_key_id #=> String
resp.creation_time_stamp #=> Integer
resp.last_modified_time_stamp #=> Integer
resp.anomaly_visibility_time #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :anomaly_detector_arn (required, String)

    The ARN of the anomaly detector to retrieve information about. You can find the ARNs of log anomaly detectors in your account by using the ListLogAnomalyDetectors operation.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3272

def get_log_anomaly_detector(params = {}, options = {})
  req = build_request(:get_log_anomaly_detector, params)
  req.send_request(options)
end

#get_log_events(params = {}) ⇒ Types::GetLogEventsResponse

Lists log events from the specified log stream. You can list all of the log events or filter using a time range.

By default, this operation returns as many log events as can fit in a response size of 1MB (up to 10,000 log events). You can get additional log events by specifying one of the tokens in a subsequent call. This operation can return empty results while there are more log events available through the token.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.

You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_log_events({
  log_group_name: "LogGroupName",
  log_group_identifier: "LogGroupIdentifier",
  log_stream_name: "LogStreamName", # required
  start_time: 1,
  end_time: 1,
  next_token: "NextToken",
  limit: 1,
  start_from_head: false,
  unmask: false,
})

Response structure


resp.events #=> Array
resp.events[0].timestamp #=> Integer
resp.events[0].message #=> String
resp.events[0].ingestion_time #=> Integer
resp.next_forward_token #=> String
resp.next_backward_token #=> String
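
A minimal paging sketch, for illustration: re-call get_log_events with the returned next_forward_token until the token stops changing, which indicates you have reached the end of the stream. The log group and stream names are placeholders.


# Placeholder names; replace with your own log group and stream.
params = {
  log_group_name: "my-log-group",
  log_stream_name: "my-log-stream",
  start_from_head: true, # required when paging with a previous next_forward_token
  limit: 1000,
}

token = nil
loop do
  resp = client.get_log_events(params.merge(next_token: token).compact)
  resp.events.each { |e| puts "#{e.timestamp} #{e.message}" }
  # The service returns the same token back when there are no more events.
  break if resp.next_forward_token == token
  token = resp.next_forward_token
end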

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (String)

    The name of the log group.

    You must include either logGroupIdentifier or logGroupName, but not both.

  • :log_group_identifier (String)

    Specify either the name or ARN of the log group to view events from. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN.

    You must include either logGroupIdentifier or logGroupName, but not both.

  • :log_stream_name (required, String)

    The name of the log stream.

  • :start_time (Integer)

    The start of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp equal to this time or later than this time are included. Events with a timestamp earlier than this time are not included.

  • :end_time (Integer)

    The end of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp equal to or later than this time are not included.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of log events returned. If you don't specify a limit, the default is as many log events as can fit in a response size of 1 MB (up to 10,000 log events).

  • :start_from_head (Boolean)

    If the value is true, the earliest log events are returned first. If the value is false, the latest log events are returned first. The default value is false.

    If you are using a previous nextForwardToken value as the nextToken in this operation, you must specify true for startFromHead.

  • :unmask (Boolean)

    Specify true to display the log event fields with all sensitive data unmasked and visible. The default is false.

    To use this operation with this parameter, you must be signed into an account with the logs:Unmask permission.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3391

def get_log_events(params = {}, options = {})
  req = build_request(:get_log_events, params)
  req.send_request(options)
end

#get_log_group_fields(params = {}) ⇒ Types::GetLogGroupFieldsResponse

Returns a list of the fields that are included in log events in the specified log group. Includes the percentage of log events that contain each field. The search is limited to a time period that you specify.

You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must specify one of these parameters, but you can't specify both.

In the results, fields that start with @ are fields generated by CloudWatch Logs. For example, @timestamp is the timestamp of each log event. For more information about the fields that are generated by CloudWatch Logs, see Supported Logs and Discovered Fields.

The response results are sorted by the frequency percentage, starting with the highest percentage.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.

Examples:

Request syntax with placeholder values


resp = client.get_log_group_fields({
  log_group_name: "LogGroupName",
  time: 1,
  log_group_identifier: "LogGroupIdentifier",
})

Response structure


resp.log_group_fields #=> Array
resp.log_group_fields[0].name #=> String
resp.log_group_fields[0].percent #=> Integer
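
As an illustrative sketch: print each discovered field with the percentage of log events that contain it. The log group name is a placeholder; fields are returned sorted by frequency, highest first.


# Placeholder log group name.
resp = client.get_log_group_fields(log_group_name: "my-log-group")

resp.log_group_fields.each do |field|
  puts format("%-30s %d%%", field.name, field.percent)
end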

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (String)

    The name of the log group to search.

    You must include either logGroupIdentifier or logGroupName, but not both.

  • :time (Integer)

    The time to set as the center of the query. If you specify time, the 8 minutes before and 8 minutes after this time are searched. If you omit time, the most recent 15 minutes up to the current time are searched.

    The time value is specified as epoch time, which is the number of seconds since January 1, 1970, 00:00:00 UTC.

  • :log_group_identifier (String)

    Specify either the name or ARN of the log group to view. If the log group is in a source account and you are using a monitoring account, you must specify the ARN.

    You must include either logGroupIdentifier or logGroupName, but not both.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3472

def get_log_group_fields(params = {}, options = {})
  req = build_request(:get_log_group_fields, params)
  req.send_request(options)
end

#get_log_record(params = {}) ⇒ Types::GetLogRecordResponse

Retrieves all of the fields and values of a single log event. All fields are retrieved, even if the original query that produced the logRecordPointer retrieved only a subset of fields. Fields are returned as field name/field value pairs.

The full unparsed log event is returned within @message.

Examples:

Request syntax with placeholder values


resp = client.get_log_record({
  log_record_pointer: "LogRecordPointer", # required
  unmask: false,
})

Response structure


resp.log_record #=> Hash
resp.log_record["Field"] #=> String
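
A minimal sketch, for illustration: take the @ptr value from a row returned by GetQueryResults (documented below) and use it to fetch the complete log record. The query ID is a placeholder.


# "query-id" is a placeholder for an ID returned by start_query.
results = client.get_query_results(query_id: "query-id")

first_row = results.results.first || []
ptr = first_row.find { |f| f.field == "@ptr" }&.value

if ptr
  record = client.get_log_record(log_record_pointer: ptr)
  # record.log_record is a Hash of field name => field value pairs.
  puts record.log_record["@message"]
end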

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_record_pointer (required, String)

    The pointer corresponding to the log event record you want to retrieve. You get this from the response of a GetQueryResults operation. In that response, the value of the @ptr field for a log event is the value to use as logRecordPointer to retrieve that complete log event record.

  • :unmask (Boolean)

    Specify true to display the log event fields with all sensitive data unmasked and visible. The default is false.

    To use this operation with this parameter, you must be signed into an account with the logs:Unmask permission.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3518

def get_log_record(params = {}, options = {})
  req = build_request(:get_log_record, params)
  req.send_request(options)
end

#get_query_results(params = {}) ⇒ Types::GetQueryResultsResponse

Returns the results from the specified query.

Only the fields requested in the query are returned, along with a @ptr field, which is the identifier for the log record. You can use the value of @ptr in a GetLogRecord operation to get the full log record.

GetQueryResults does not start running a query. To run a query, use StartQuery. For more information about how long results of previous queries are available, see CloudWatch Logs quotas.

If the value of the Status field in the output is Running, this operation returns only partial results. If you see a value of Scheduled or Running for the status, you can retry the operation later to see the final results.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start queries in linked source accounts. For more information, see CloudWatch cross-account observability.

Examples:

Request syntax with placeholder values


resp = client.get_query_results({
  query_id: "QueryId", # required
})

Response structure


resp.query_language #=> String, one of "CWLI", "SQL", "PPL"
resp.results #=> Array
resp.results[0] #=> Array
resp.results[0][0].field #=> String
resp.results[0][0].value #=> String
resp.statistics.records_matched #=> Float
resp.statistics.records_scanned #=> Float
resp.statistics.estimated_records_skipped #=> Float
resp.statistics.bytes_scanned #=> Float
resp.statistics.estimated_bytes_skipped #=> Float
resp.statistics.log_groups_scanned #=> Float
resp.status #=> String, one of "Scheduled", "Running", "Complete", "Failed", "Cancelled", "Timeout", "Unknown"
resp.encryption_key #=> String
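
A minimal polling sketch, for illustration: start a query with StartQuery and keep calling get_query_results until the status leaves Scheduled or Running. The log group name, query string, and time range are placeholders.


# Placeholders: adjust the log group, query string, and time range.
start = client.start_query(
  log_group_name: "my-log-group",
  start_time: (Time.now - 3600).to_i,
  end_time: Time.now.to_i,
  query_string: "fields @timestamp, @message | limit 20",
)

resp = nil
loop do
  resp = client.get_query_results(query_id: start.query_id)
  break unless %w[Scheduled Running].include?(resp.status)
  sleep 2 # back off between polls
end

puts "matched #{resp.statistics.records_matched} records" if resp.status == "Complete"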

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :query_id (required, String)

    The ID number of the query.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3588

def get_query_results(params = {}, options = {})
  req = build_request(:get_query_results, params)
  req.send_request(options)
end

#get_transformer(params = {}) ⇒ Types::GetTransformerResponse

Returns the information about the log transformer associated with this log group.

This operation returns data only for transformers created at the log group level. To get information for an account-level transformer, use DescribeAccountPolicies.

Examples:

Request syntax with placeholder values


resp = client.get_transformer({
  log_group_identifier: "LogGroupIdentifier", # required
})

Response structure


resp.log_group_identifier #=> String
resp.creation_time #=> Integer
resp.last_modified_time #=> Integer
resp.transformer_config #=> Array
resp.transformer_config[0].add_keys.entries #=> Array
resp.transformer_config[0].add_keys.entries[0].key #=> String
resp.transformer_config[0].add_keys.entries[0].value #=> String
resp.transformer_config[0].add_keys.entries[0].overwrite_if_exists #=> Boolean
resp.transformer_config[0].copy_value.entries #=> Array
resp.transformer_config[0].copy_value.entries[0].source #=> String
resp.transformer_config[0].copy_value.entries[0].target #=> String
resp.transformer_config[0].copy_value.entries[0].overwrite_if_exists #=> Boolean
resp.transformer_config[0].csv.quote_character #=> String
resp.transformer_config[0].csv.delimiter #=> String
resp.transformer_config[0].csv.columns #=> Array
resp.transformer_config[0].csv.columns[0] #=> String
resp.transformer_config[0].csv.source #=> String
resp.transformer_config[0].date_time_converter.source #=> String
resp.transformer_config[0].date_time_converter.target #=> String
resp.transformer_config[0].date_time_converter.target_format #=> String
resp.transformer_config[0].date_time_converter.match_patterns #=> Array
resp.transformer_config[0].date_time_converter.match_patterns[0] #=> String
resp.transformer_config[0].date_time_converter.source_timezone #=> String
resp.transformer_config[0].date_time_converter.target_timezone #=> String
resp.transformer_config[0].date_time_converter.locale #=> String
resp.transformer_config[0].delete_keys.with_keys #=> Array
resp.transformer_config[0].delete_keys.with_keys[0] #=> String
resp.transformer_config[0].grok.source #=> String
resp.transformer_config[0].grok.match #=> String
resp.transformer_config[0].list_to_map.source #=> String
resp.transformer_config[0].list_to_map.key #=> String
resp.transformer_config[0].list_to_map.value_key #=> String
resp.transformer_config[0].list_to_map.target #=> String
resp.transformer_config[0].list_to_map.flatten #=> Boolean
resp.transformer_config[0].list_to_map.flattened_element #=> String, one of "first", "last"
resp.transformer_config[0].lower_case_string.with_keys #=> Array
resp.transformer_config[0].lower_case_string.with_keys[0] #=> String
resp.transformer_config[0].move_keys.entries #=> Array
resp.transformer_config[0].move_keys.entries[0].source #=> String
resp.transformer_config[0].move_keys.entries[0].target #=> String
resp.transformer_config[0].move_keys.entries[0].overwrite_if_exists #=> Boolean
resp.transformer_config[0].parse_cloudfront.source #=> String
resp.transformer_config[0].parse_json.source #=> String
resp.transformer_config[0].parse_json.destination #=> String
resp.transformer_config[0].parse_key_value.source #=> String
resp.transformer_config[0].parse_key_value.destination #=> String
resp.transformer_config[0].parse_key_value.field_delimiter #=> String
resp.transformer_config[0].parse_key_value.key_value_delimiter #=> String
resp.transformer_config[0].parse_key_value.key_prefix #=> String
resp.transformer_config[0].parse_key_value.non_match_value #=> String
resp.transformer_config[0].parse_key_value.overwrite_if_exists #=> Boolean
resp.transformer_config[0].parse_route_53.source #=> String
resp.transformer_config[0].parse_postgres.source #=> String
resp.transformer_config[0].parse_vpc.source #=> String
resp.transformer_config[0].parse_waf.source #=> String
resp.transformer_config[0].rename_keys.entries #=> Array
resp.transformer_config[0].rename_keys.entries[0].key #=> String
resp.transformer_config[0].rename_keys.entries[0].rename_to #=> String
resp.transformer_config[0].rename_keys.entries[0].overwrite_if_exists #=> Boolean
resp.transformer_config[0].split_string.entries #=> Array
resp.transformer_config[0].split_string.entries[0].source #=> String
resp.transformer_config[0].split_string.entries[0].delimiter #=> String
resp.transformer_config[0].substitute_string.entries #=> Array
resp.transformer_config[0].substitute_string.entries[0].source #=> String
resp.transformer_config[0].substitute_string.entries[0].from #=> String
resp.transformer_config[0].substitute_string.entries[0].to #=> String
resp.transformer_config[0].trim_string.with_keys #=> Array
resp.transformer_config[0].trim_string.with_keys[0] #=> String
resp.transformer_config[0].type_converter.entries #=> Array
resp.transformer_config[0].type_converter.entries[0].key #=> String
resp.transformer_config[0].type_converter.entries[0].type #=> String, one of "boolean", "integer", "double", "string"
resp.transformer_config[0].upper_case_string.with_keys #=> Array
resp.transformer_config[0].upper_case_string.with_keys[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifier (required, String)

    Specify either the name or ARN of the log group to return transformer information for. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3702

def get_transformer(params = {}, options = {})
  req = build_request(:get_transformer, params)
  req.send_request(options)
end

#list_anomalies(params = {}) ⇒ Types::ListAnomaliesResponse

Returns a list of anomalies that log anomaly detectors have found. For details about the structure format of each anomaly object that is returned, see the example in this section.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_anomalies({
  anomaly_detector_arn: "AnomalyDetectorArn",
  suppression_state: "SUPPRESSED", # accepts SUPPRESSED, UNSUPPRESSED
  limit: 1,
  next_token: "NextToken",
})

Response structure


resp.anomalies #=> Array
resp.anomalies[0].anomaly_id #=> String
resp.anomalies[0].pattern_id #=> String
resp.anomalies[0].anomaly_detector_arn #=> String
resp.anomalies[0].pattern_string #=> String
resp.anomalies[0].pattern_regex #=> String
resp.anomalies[0].priority #=> String
resp.anomalies[0].first_seen #=> Integer
resp.anomalies[0].last_seen #=> Integer
resp.anomalies[0].description #=> String
resp.anomalies[0].active #=> Boolean
resp.anomalies[0].state #=> String, one of "Active", "Suppressed", "Baseline"
resp.anomalies[0].histogram #=> Hash
resp.anomalies[0].histogram["Time"] #=> Integer
resp.anomalies[0].log_samples #=> Array
resp.anomalies[0].log_samples[0].timestamp #=> Integer
resp.anomalies[0].log_samples[0].message #=> String
resp.anomalies[0].pattern_tokens #=> Array
resp.anomalies[0].pattern_tokens[0].dynamic_token_position #=> Integer
resp.anomalies[0].pattern_tokens[0].is_dynamic #=> Boolean
resp.anomalies[0].pattern_tokens[0].token_string #=> String
resp.anomalies[0].pattern_tokens[0].enumerations #=> Hash
resp.anomalies[0].pattern_tokens[0].enumerations["TokenString"] #=> Integer
resp.anomalies[0].pattern_tokens[0].inferred_token_name #=> String
resp.anomalies[0].log_group_arn_list #=> Array
resp.anomalies[0].log_group_arn_list[0] #=> String
resp.anomalies[0].suppressed #=> Boolean
resp.anomalies[0].suppressed_date #=> Integer
resp.anomalies[0].suppressed_until #=> Integer
resp.anomalies[0].is_pattern_level_suppression #=> Boolean
resp.next_token #=> String
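
A minimal sketch, for illustration: because the response is pageable, each_page can drain all pages. The detector ARN is a placeholder and can be omitted to list anomalies from every detector in the account.


# Placeholder detector ARN.
pages = client.list_anomalies(
  anomaly_detector_arn: "arn:aws:logs:us-east-1:123456789012:anomaly-detector:EXAMPLE",
  suppression_state: "UNSUPPRESSED",
)

pages.each_page do |page|
  page.anomalies.each do |anomaly|
    puts "#{anomaly.anomaly_id}: #{anomaly.description} (priority #{anomaly.priority})"
  end
end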

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :anomaly_detector_arn (String)

    Use this to optionally limit the results to only the anomalies found by a certain anomaly detector.

  • :suppression_state (String)

    You can specify this parameter if you want the operation to return only anomalies that are currently either suppressed or unsuppressed.

  • :limit (Integer)

    The maximum number of items to return. If you don't specify a value, the default maximum value of 50 items is used.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3781

def list_anomalies(params = {}, options = {})
  req = build_request(:list_anomalies, params)
  req.send_request(options)
end

#list_integrations(params = {}) ⇒ Types::ListIntegrationsResponse

Returns a list of integrations between CloudWatch Logs and other services in this account. Currently, only one integration can be created in an account, and this integration must be with OpenSearch Service.

Examples:

Request syntax with placeholder values


resp = client.list_integrations({
  integration_name_prefix: "IntegrationNamePrefix",
  integration_type: "OPENSEARCH", # accepts OPENSEARCH
  integration_status: "PROVISIONING", # accepts PROVISIONING, ACTIVE, FAILED
})

Response structure


resp.integration_summaries #=> Array
resp.integration_summaries[0].integration_name #=> String
resp.integration_summaries[0].integration_type #=> String, one of "OPENSEARCH"
resp.integration_summaries[0].integration_status #=> String, one of "PROVISIONING", "ACTIVE", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_name_prefix (String)

    To limit the results to integrations that start with a certain name prefix, specify that name prefix here.

  • :integration_type (String)

    To limit the results to integrations of a certain type, specify that type here.

  • :integration_status (String)

    To limit the results to integrations with a certain status, specify that status here.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3826

def list_integrations(params = {}, options = {})
  req = build_request(:list_integrations, params)
  req.send_request(options)
end

#list_log_anomaly_detectors(params = {}) ⇒ Types::ListLogAnomalyDetectorsResponse

Retrieves a list of the log anomaly detectors in the account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_log_anomaly_detectors({
  filter_log_group_arn: "LogGroupArn",
  limit: 1,
  next_token: "NextToken",
})

Response structure


resp.anomaly_detectors #=> Array
resp.anomaly_detectors[0].anomaly_detector_arn #=> String
resp.anomaly_detectors[0].detector_name #=> String
resp.anomaly_detectors[0].log_group_arn_list #=> Array
resp.anomaly_detectors[0].log_group_arn_list[0] #=> String
resp.anomaly_detectors[0].evaluation_frequency #=> String, one of "ONE_MIN", "FIVE_MIN", "TEN_MIN", "FIFTEEN_MIN", "THIRTY_MIN", "ONE_HOUR"
resp.anomaly_detectors[0].filter_pattern #=> String
resp.anomaly_detectors[0].anomaly_detector_status #=> String, one of "INITIALIZING", "TRAINING", "ANALYZING", "FAILED", "DELETED", "PAUSED"
resp.anomaly_detectors[0].kms_key_id #=> String
resp.anomaly_detectors[0].creation_time_stamp #=> Integer
resp.anomaly_detectors[0].last_modified_time_stamp #=> Integer
resp.anomaly_detectors[0].anomaly_visibility_time #=> Integer
resp.next_token #=> String
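
A short illustrative sketch that pairs this operation with GetLogAnomalyDetector (documented above) to print the status of every detector in the account.


client.list_log_anomaly_detectors.each_page do |page|
  page.anomaly_detectors.each do |summary|
    detail = client.get_log_anomaly_detector(anomaly_detector_arn: summary.anomaly_detector_arn)
    puts "#{detail.detector_name}: #{detail.anomaly_detector_status}"
  end
end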

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter_log_group_arn (String)

    Use this to optionally filter the results to only include anomaly detectors that are associated with the specified log group.

  • :limit (Integer)

    The maximum number of items to return. If you don't specify a value, the default maximum value of 50 items is used.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3880

def list_log_anomaly_detectors(params = {}, options = {})
  req = build_request(:list_log_anomaly_detectors, params)
  req.send_request(options)
end

#list_log_groups_for_query(params = {}) ⇒ Types::ListLogGroupsForQueryResponse

Returns a list of the log groups that were analyzed during a single CloudWatch Logs Insights query. This can be useful for queries that use log group name prefixes or the filterIndex command, because the log groups are dynamically selected in these cases.

For more information about field indexes, see Create field indexes to improve query performance and reduce costs.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_log_groups_for_query({
  query_id: "QueryId", # required
  next_token: "NextToken",
  max_results: 1,
})

Response structure


resp.log_group_identifiers #=> Array
resp.log_group_identifiers[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :query_id (required, String)

    The ID of the query to use. This query ID is from the response to your StartQuery operation.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

  • :max_results (Integer)

    Limits the number of returned log groups to the specified number.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3937

def list_log_groups_for_query(params = {}, options = {})
  req = build_request(:list_log_groups_for_query, params)
  req.send_request(options)
end

#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse

Displays the tags associated with a CloudWatch Logs resource. Currently, log groups and destinations support tagging.

Examples:

Request syntax with placeholder values


resp = client.list_tags_for_resource({
  resource_arn: "AmazonResourceName", # required
})

Response structure


resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The ARN of the resource that you want to view tags for.

    The ARN format of a log group is arn:aws:logs:Region:account-id:log-group:log-group-name

    The ARN format of a destination is arn:aws:logs:Region:account-id:destination:destination-name

    For more information about ARN format, see CloudWatch Logs resources and operations.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3980

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end

#list_tags_log_group(params = {}) ⇒ Types::ListTagsLogGroupResponse

The ListTagsLogGroup operation is on the path to deprecation. We recommend that you use ListTagsForResource instead.

Lists the tags for the specified log group.

Examples:

Request syntax with placeholder values


resp = client.list_tags_log_group({
  log_group_name: "LogGroupName", # required
})

Response structure


resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4016

def list_tags_log_group(params = {}, options = {})
  req = build_request(:list_tags_log_group, params)
  req.send_request(options)
end

#put_account_policy(params = {}) ⇒ Types::PutAccountPolicyResponse

Creates an account-level data protection policy, subscription filter policy, or field index policy that applies to all log groups or a subset of log groups in the account.

Data protection policy

A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy.

Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked.

If you use PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.

By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.

For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.

To use the PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.

The PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.

Subscription filter policy

A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.

The following destinations are supported for subscription filters:

  • A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.

  • A Firehose data stream in the same account as the subscription policy, for same-account delivery.

  • A Lambda function in the same account as the subscription policy, for same-account delivery.

  • A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.

Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in PolicyName. To perform a PutAccountPolicy subscription filter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.

Transformer policy

Creates or updates a log transformer policy for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.

You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.

A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use.

Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.

You can create transformers only for the log groups in the Standard log class.

You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you have multiple account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another transformer policy filtered to my-logpprod or my-logging.

You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.

Field index policy

You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, or instance IDs. For more information, see Create field indexes to improve query performance and reduce costs

To find the fields that are in your log group events, use the GetLogGroupFields operation.

For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId in [value, value, ...] will attempt to process only the log events where the indexed field matches the specified value.

Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of RequestId won't match a log event containing requestId.

You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you have multiple account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another field index policy filtered to my-logpprod or my-logging.

If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts.

If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of PutAccountPolicy. If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy that you create with PutAccountPolicy.

Examples:

Request syntax with placeholder values


resp = client.put_account_policy({
  policy_name: "PolicyName", # required
  policy_document: "AccountPolicyDocument", # required
  policy_type: "DATA_PROTECTION_POLICY", # required, accepts DATA_PROTECTION_POLICY, SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY, TRANSFORMER_POLICY
  scope: "ALL", # accepts ALL
  selection_criteria: "SelectionCriteria",
})

Response structure


resp.account_policy.policy_name #=> String
resp.account_policy.policy_document #=> String
resp.account_policy.last_updated_time #=> Integer
resp.account_policy.policy_type #=> String, one of "DATA_PROTECTION_POLICY", "SUBSCRIPTION_FILTER_POLICY", "FIELD_INDEX_POLICY", "TRANSFORMER_POLICY"
resp.account_policy.scope #=> String, one of "ALL"
resp.account_policy.selection_criteria #=> String
resp.account_policy.account_id #=> String
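
A minimal sketch, for illustration: creating an account-level field index policy. The policy name and indexed field names are placeholders; when scope is omitted, the policy applies to all log groups in the account, and selection_criteria (described below) can narrow it to a prefix.


require "json"

# Placeholder policy name and field names.
index_document = { "Fields" => ["RequestId", "TransactionId"] }.to_json

resp = client.put_account_policy(
  policy_name: "account-field-index-policy",
  policy_type: "FIELD_INDEX_POLICY",
  policy_document: index_document,
)

puts resp.account_policy.policy_type #=> "FIELD_INDEX_POLICY"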

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :policy_name (required, String)

    A name for the policy. This must be unique within the account.

  • :policy_document (required, String)

    Specify the policy, in JSON.

    Data protection policy

    A data protection policy must include two JSON blocks:

    • The first block must include both a DataIdentifier array and an Operation property with an Audit action. The DataIdentifier array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask.

      The Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Firehose streams, and S3 buckets, they must already exist.

    • The second block must include both a DataIdentifier array and an Operation property with a Deidentify action. The DataIdentifier array must exactly match the DataIdentifier array in the first block of the policy.

      The Operation property with the Deidentify action is what actually masks the data, and it must contain the "MaskConfig": {} object. The "MaskConfig": {} object must be empty.

    For an example data protection policy, see the Examples section on this page.

    The contents of the two DataIdentifier arrays must match exactly.

    In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is different from the operation's policyName parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.

    The JSON specified in policyDocument can be up to 30,720 characters long.

    Subscription filter policy

    A subscription filter policy can include the following attributes in a JSON block:

    • DestinationArn The ARN of the destination to deliver log events to. Supported destinations are:

      • A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.

      • A Firehose data stream in the same account as the subscription policy, for same-account delivery.

      • A Lambda function in the same account as the subscription policy, for same-account delivery.

      • A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.

    • RoleArn The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery.

    • FilterPattern A filter pattern for subscribing to a filtered stream of log events.

    • Distribution The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to Random for a more even distribution. This property is only applicable when the destination is a Kinesis Data Streams data stream.

    Transformer policy

    A transformer policy must include one JSON block with the array of processors and their configurations. For more information about available processors, see Processors that you can use.

    Field index policy

    A field index filter policy can include the following attribute in a JSON block:

    • Fields The array of field indexes to create.

    It must contain at least one field index.

    The following is an example of an index policy document that creates two indexes, RequestId and TransactionId.

    "policyDocument": "{ "Fields": [ "RequestId", "TransactionId" ] }"

  • :policy_type (required, String)

    The type of policy that you're creating or updating.

  • :scope (String)

    Currently the only valid value for this parameter is ALL, which specifies that the data protection policy applies to all log groups in the account. If you omit this parameter, the default of ALL is used.

  • :selection_criteria (String)

    Use this parameter to apply the new policy to a subset of log groups in the account.

    Specifying selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY, or TRANSFORMER_POLICY for policyType.

    If policyType is SUBSCRIPTION_FILTER_POLICY, the only supported selectionCriteria filter is LogGroupName NOT IN []

    If policyType is FIELD_INDEX_POLICY or TRANSFORMER_POLICY, the only supported selectionCriteria filter is LogGroupNamePrefix

    The selectionCriteria string can be up to 25KB in length. The length is determined by using its UTF-8 bytes.

    Using the selectionCriteria parameter with SUBSCRIPTION_FILTER_POLICY is useful to help prevent infinite loops. For more information, see Log recursion prevention.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4376

def put_account_policy(params = {}, options = {})
  req = build_request(:put_account_policy, params)
  req.send_request(options)
end

#put_data_protection_policy(params = {}) ⇒ Types::PutDataProtectionPolicyResponse

Creates a data protection policy for the specified log group. A data protection policy can help safeguard sensitive data that's ingested by the log group by auditing and masking the sensitive log data.

Sensitive data is detected and masked when it is ingested into the log group. When you set a data protection policy, log events ingested into the log group before that time are not masked.

By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.

For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.

The PutDataProtectionPolicy operation applies to only the specified log group. You can also use PutAccountPolicy to create an account-level data protection policy that applies to all log groups in the account, including both existing log groups and log groups that are created later. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.

Examples:

Request syntax with placeholder values


resp = client.put_data_protection_policy({
  log_group_identifier: "LogGroupIdentifier", # required
  policy_document: "DataProtectionPolicyDocument", # required
})

Response structure


resp.log_group_identifier #=> String
resp.policy_document #=> String
resp.last_updated_time #=> Integer
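
A minimal sketch of the two-block policy document described under :policy_document below, for illustration. The log group name, policy name, and data identifier ARN are placeholders; the empty FindingsDestination and MaskConfig objects follow the structure described in that parameter's notes.


require "json"

# Both blocks must share the same DataIdentifier list; values here are placeholders.
policy = {
  "Name"      => "example-data-protection-policy",
  "Version"   => "2021-06-01",
  "Statement" => [
    {
      "Sid"            => "audit-policy",
      "DataIdentifier" => ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation"      => { "Audit" => { "FindingsDestination" => {} } },
    },
    {
      "Sid"            => "redact-policy",
      "DataIdentifier" => ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation"      => { "Deidentify" => { "MaskConfig" => {} } },
    },
  ],
}

resp = client.put_data_protection_policy(
  log_group_identifier: "my-log-group", # placeholder
  policy_document: policy.to_json,
)
puts resp.last_updated_time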

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifier (required, String)

    Specify either the log group name or log group ARN.

  • :policy_document (required, String)

    Specify the data protection policy, in JSON.

    This policy must include two JSON blocks:

    • The first block must include both a DataIdentifier array and an Operation property with an Audit action. The DataIdentifier array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask.

      The Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Firehose streams, and S3 buckets, they must already exist.

    • The second block must include both a DataIdentifier array and an Operation property with a Deidentify action. The DataIdentifier array must exactly match the DataIdentifier array in the first block of the policy.

      The Operation property with the Deidentify action is what actually masks the data, and it must contain the "MaskConfig": {} object. The "MaskConfig": {} object must be empty.

    For an example data protection policy, see the Examples section on this page.

    The contents of the two DataIdentifier arrays must match exactly.

    In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.

    The JSON specified in policyDocument can be up to 30,720 characters.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4485

def put_data_protection_policy(params = {}, options = {})
  req = build_request(:put_data_protection_policy, params)
  req.send_request(options)
end

#put_delivery_destination(params = {}) ⇒ Types::PutDeliveryDestinationResponse

Creates or updates a logical delivery destination. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as logs delivery destinations.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

  • Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource.

  • Use PutDeliveryDestination to create a delivery destination, which is a logical object that represents the actual delivery destination.

  • If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.

  • Use CreateDelivery to create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery.

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

If you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten with the new parameter values that you specify.

Examples:

Request syntax with placeholder values


resp = client.put_delivery_destination({
  name: "DeliveryDestinationName", # required
  output_format: "json", # accepts json, plain, w3c, raw, parquet
  delivery_destination_configuration: { # required
    destination_resource_arn: "Arn", # required
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.delivery_destination.name #=> String
resp.delivery_destination.arn #=> String
resp.delivery_destination.delivery_destination_type #=> String, one of "S3", "CWL", "FH"
resp.delivery_destination.output_format #=> String, one of "json", "plain", "w3c", "raw", "parquet"
resp.delivery_destination.delivery_destination_configuration.destination_resource_arn #=> String
resp.delivery_destination.tags #=> Hash
resp.delivery_destination.tags["TagKey"] #=> String
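
A minimal sketch of the setup steps described above, for illustration: create a delivery source, create this delivery destination, then pair them with CreateDelivery (documented separately on this client). The names, ARNs, and log type are placeholders.


# 1. Delivery source: the resource that emits the logs (placeholder WorkMail ARN).
source = client.put_delivery_source(
  name: "my-delivery-source",
  resource_arn: "arn:aws:workmail:us-east-1:123456789012:organization/m-EXAMPLE",
  log_type: "ACCESS_CONTROL_LOGS",
)

# 2. Delivery destination: where the logs land (placeholder S3 bucket ARN).
destination = client.put_delivery_destination(
  name: "my-delivery-destination",
  delivery_destination_configuration: {
    destination_resource_arn: "arn:aws:s3:::amzn-s3-demo-bucket",
  },
)

# 3. Pair exactly one source with one destination.
delivery = client.create_delivery(
  delivery_source_name: source.delivery_source.name,
  delivery_destination_arn: destination.delivery_destination.arn,
)
puts delivery.delivery.id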

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    A name for this delivery destination. This name must be unique for all delivery destinations in your account.

  • :output_format (String)

    The format for the logs that this delivery destination will receive.

  • :delivery_destination_configuration (required, Types::DeliveryDestinationConfiguration)

    A structure that contains the ARN of the Amazon Web Services resource that will receive the logs.

  • :tags (Hash<String,String>)

    An optional list of key-value pairs to associate with the resource.

    For more information about tagging, see Tagging Amazon Web Services resources

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4588

def put_delivery_destination(params = {}, options = {})
  req = build_request(:put_delivery_destination, params)
  req.send_request(options)
end

#put_delivery_destination_policy(params = {}) ⇒ Types::PutDeliveryDestinationPolicyResponse

Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account. To configure the delivery of logs from an Amazon Web Services service in another account to a logs delivery destination in the current account, you must do the following:

  • Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource.

  • Create a delivery destination, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.

  • Use this operation in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.

  • Create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

The contents of the policy must include two statements. One statement enables general logs delivery, and the other allows delivery to the chosen destination. See the examples for the needed policies.

Examples:

Request syntax with placeholder values


resp = client.put_delivery_destination_policy({
  delivery_destination_name: "DeliveryDestinationName", # required
  delivery_destination_policy: "DeliveryDestinationPolicy", # required
})

Response structure


resp.policy.delivery_destination_policy #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :delivery_destination_name (required, String)

    The name of the delivery destination to assign this policy to.

  • :delivery_destination_policy (required, String)

    The contents of the policy.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4655

def put_delivery_destination_policy(params = {}, options = {})
  req = build_request(:put_delivery_destination_policy, params)
  req.send_request(options)
end

#put_delivery_source(params = {}) ⇒ Types::PutDeliverySourceResponse

Creates or updates a logical delivery source. A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose.

To configure logs delivery between a delivery destination and an Amazon Web Services service that is supported as a delivery source, you must do the following:

  • Use PutDeliverySource to create a delivery source, which is a logical object that represents the resource that is actually sending the logs.

  • Use PutDeliveryDestination to create a delivery destination, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.

  • If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.

  • Use CreateDelivery to create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery.

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

If you use this operation to update an existing delivery source, all the current delivery source parameters are overwritten with the new parameter values that you specify.

Examples:

Request syntax with placeholder values


resp = client.put_delivery_source({
  name: "DeliverySourceName", # required
  resource_arn: "Arn", # required
  log_type: "LogType", # required
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.delivery_source.name #=> String
resp.delivery_source.arn #=> String
resp.delivery_source.resource_arns #=> Array
resp.delivery_source.resource_arns[0] #=> String
resp.delivery_source.service #=> String
resp.delivery_source.log_type #=> String
resp.delivery_source.tags #=> Hash
resp.delivery_source.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    A name for this delivery source. This name must be unique for all delivery sources in your account.

  • :resource_arn (required, String)

    The ARN of the Amazon Web Services resource that is generating and sending logs. For example, arn:aws:workmail:us-east-1:123456789012:organization/m-1234EXAMPLEabcd1234abcd1234abcd1234

  • :log_type (required, String)

    Defines the type of log that the source is sending.

    • For Amazon Bedrock, the valid value is APPLICATION_LOGS.

    • For Amazon CodeWhisperer, the valid value is EVENT_LOGS.

    • For IAM Identity Center, the valid value is ERROR_LOGS.

    • For Amazon WorkMail, the valid values are ACCESS_CONTROL_LOGS, AUTHENTICATION_LOGS, WORKMAIL_AVAILABILITY_PROVIDER_LOGS, and WORKMAIL_MAILBOX_ACCESS_LOGS.

  • :tags (Hash<String,String>)

    An optional list of key-value pairs to associate with the resource.

    For more information about tagging, see Tagging Amazon Web Services resources

Returns:

See Also:



4769
4770
4771
4772
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4769

def put_delivery_source(params = {}, options = {})
  req = build_request(:put_delivery_source, params)
  req.send_request(options)
end

#put_destination(params = {}) ⇒ Types::PutDestinationResponse

Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions.

A destination encapsulates a physical resource (such as an Amazon Kinesis stream). With a destination, you can subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents.

Through an access policy, a destination controls what is written to it. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.

To perform a PutDestination operation, you must also have the iam:PassRole permission.

Examples:

Request syntax with placeholder values


resp = client.put_destination({
  destination_name: "DestinationName", # required
  target_arn: "TargetArn", # required
  role_arn: "RoleArn", # required
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.destination.destination_name #=> String
resp.destination.target_arn #=> String
resp.destination.role_arn #=> String
resp.destination.access_policy #=> String
resp.destination.arn #=> String
resp.destination.creation_time #=> Integer
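
A minimal sketch of the cross-account flow described above, for illustration: create the destination that wraps a Kinesis stream, then attach an access policy with PutDestinationPolicy so another account can create a subscription filter against it. The ARNs and account IDs are placeholders.


require "json"

# Placeholder stream and role ARNs in the destination (recipient) account.
dest = client.put_destination(
  destination_name: "shared-destination",
  target_arn: "arn:aws:kinesis:us-east-1:111111111111:stream/example-stream",
  role_arn: "arn:aws:iam::111111111111:role/CWLtoKinesisRole",
)

# Allow a placeholder sender account to subscribe to this destination.
access_policy = {
  "Version"   => "2012-10-17",
  "Statement" => [{
    "Effect"    => "Allow",
    "Principal" => { "AWS" => "222222222222" },
    "Action"    => "logs:PutSubscriptionFilter",
    "Resource"  => dest.destination.arn,
  }],
}

client.put_destination_policy(
  destination_name: "shared-destination",
  access_policy: access_policy.to_json,
)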

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :destination_name (required, String)

    A name for the destination.

  • :target_arn (required, String)

    The ARN of an Amazon Kinesis stream to which to deliver matching log events.

  • :role_arn (required, String)

    The ARN of an IAM role that grants CloudWatch Logs permissions to call the Amazon Kinesis PutRecord operation on the destination stream.

  • :tags (Hash<String,String>)

    An optional list of key-value pairs to associate with the resource.

    For more information about tagging, see Tagging Amazon Web Services resources

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4847

def put_destination(params = {}, options = {})
  req = build_request(:put_destination, params)
  req.send_request(options)
end

#put_destination_policy(params = {}) ⇒ Struct

Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination.

Examples:

Request syntax with placeholder values


resp = client.put_destination_policy({
  destination_name: "DestinationName", # required
  access_policy: "AccessPolicy", # required
  force_update: false,
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :destination_name (required, String)

    A name for an existing destination.

  • :access_policy (required, String)

    An IAM policy document that authorizes cross-account users to deliver their log events to the associated destination. This can be up to 5120 bytes.

  • :force_update (Boolean)

    Specify true if you are updating an existing destination policy to grant permission to an organization ID instead of granting permission to individual Amazon Web Services accounts. Before you update a destination policy this way, you must first update the subscription filters in the accounts that send logs to this destination. If you do not, the subscription filters might stop working. By specifying true for forceUpdate, you are affirming that you have already updated the subscription filters. For more information, see Updating an existing cross-account subscription

    If you omit this parameter, the default of false is used.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4900

def put_destination_policy(params = {}, options = {})
  req = build_request(:put_destination_policy, params)
  req.send_request(options)
end

#put_index_policy(params = {}) ⇒ Types::PutIndexPolicyResponse

Creates or updates a field index policy for the specified log group. Only log groups in the Standard log class support field index policies. For more information about log classes, see Log classes.

You can use field index policies to create field indexes on fields found in log events in the log group. Creating field indexes speeds up and lowers the costs for CloudWatch Logs Insights queries that reference those field indexes, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, userID, and instance IDs. For more information, see Create field indexes to improve query performance and reduce costs.

To find the fields that are in your log group events, use the GetLogGroupFields operation.

For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId IN [value, value, ...] will process fewer log events, reducing costs and improving performance.

Each index policy has the following quotas and restrictions:

  • As many as 20 fields can be included in the policy.

  • Each field name can include as many as 100 characters.

Matches of log events to the names of indexed fields are case-sensitive. For example, a field index of RequestId won't match a log event containing requestId.

Log group-level field index policies created with PutIndexPolicy override account-level field index policies created with PutAccountPolicy. If you use PutIndexPolicy to create a field index policy for a log group, that log group uses only that policy. The log group ignores any account-wide field index policy that you might have created.

Examples:

Request syntax with placeholder values


resp = client.put_index_policy({
  log_group_identifier: "LogGroupIdentifier", # required
  policy_document: "PolicyDocument", # required
})

Response structure


resp.index_policy.log_group_identifier #=> String
resp.index_policy.last_update_time #=> Integer
resp.index_policy.policy_document #=> String
resp.index_policy.policy_name #=> String
resp.index_policy.source #=> String, one of "ACCOUNT", "LOG_GROUP"
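
Example: creating a field index policy that indexes RequestId and TransactionId (a minimal sketch; the region and log group name are illustrative assumptions)


require "aws-sdk-cloudwatchlogs"
require "json"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# Index two fields that are queried often and match only a small fraction of events.
policy_document = JSON.generate({ "Fields" => ["RequestId", "TransactionId"] })

resp = client.put_index_policy({
  log_group_identifier: "my-app-log-group",  # hypothetical log group name (an ARN also works)
  policy_document: policy_document,
})

puts resp.index_policy.source  # "LOG_GROUP" for a log-group level policy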

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifier (required, String)

    Specify either the log group name or log group ARN to apply this field index policy to. If you specify an ARN, use the format arn:aws:logs:region:account-id:log-group:log_group_name. Don't include an asterisk (*) at the end.

  • :policy_document (required, String)

    The index policy document, in JSON format. The following is an example of an index policy document that creates two indexes, RequestId and TransactionId.

    "policyDocument": "{ "Fields": [ "RequestId", "TransactionId" ] }"

    The policy document must include at least one field index. For more information about the fields that can be included and other restrictions, see Field index syntax and quotas.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4998

def put_index_policy(params = {}, options = {})
  req = build_request(:put_index_policy, params)
  req.send_request(options)
end

#put_integration(params = {}) ⇒ Types::PutIntegrationResponse

Creates an integration between CloudWatch Logs and another service in this account. Currently, only integrations with OpenSearch Service are supported, and you can have only one integration in your account.

Integrating with OpenSearch Service makes it possible for you to create curated vended logs dashboards, powered by OpenSearch Service analytics. For more information, see Vended log dashboards powered by Amazon OpenSearch Service.

You can use this operation only to create a new integration. You can't modify an existing integration.

Examples:

Request syntax with placeholder values


resp = client.put_integration({
  integration_name: "IntegrationName", # required
  resource_config: { # required
    open_search_resource_config: {
      kms_key_arn: "Arn",
      data_source_role_arn: "Arn", # required
      dashboard_viewer_principals: ["Arn"], # required
      application_arn: "Arn",
      retention_days: 1, # required
    },
  },
  integration_type: "OPENSEARCH", # required, accepts OPENSEARCH
})

Response structure


resp.integration_name #=> String
resp.integration_status #=> String, one of "PROVISIONING", "ACTIVE", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_name (required, String)

    A name for the integration.

  • :resource_config (required, Types::ResourceConfig)

    A structure that contains configuration information for the integration that you are creating.

  • :integration_type (required, String)

    The type of integration. Currently, the only supported type is OPENSEARCH.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5061

def put_integration(params = {}, options = {})
  req = build_request(:put_integration, params)
  req.send_request(options)
end

#put_log_events(params = {}) ⇒ Types::PutLogEventsResponse

Uploads a batch of log events to the specified log stream.

The sequence token is now ignored in PutLogEvents actions. PutLogEvents actions are always accepted and never return InvalidSequenceTokenException or DataAlreadyAcceptedException even if the sequence token is not valid. You can use parallel PutLogEvents actions on the same log stream.

The batch of events must satisfy the following constraints:

  • The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.

  • None of the log events in the batch can be more than 2 hours in the future.

  • None of the log events in the batch can be more than 14 days in the past. Also, none of the log events can be from earlier than the retention period of the log group.

  • The log events in the batch must be in chronological order by their timestamp. The timestamp is the time that the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In Amazon Web Services Tools for PowerShell and the Amazon Web Services SDK for .NET, the timestamp is specified in .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.)

  • A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.

  • Each log event can be no larger than 256 KB.

  • The maximum number of log events in a batch is 10,000.

  • The quota of five requests per second per log stream has been removed. Instead, PutLogEvents actions are throttled based on a per-second per-account quota. You can request an increase to the per-second throttling quota by using the Service Quotas service.

If a call to PutLogEvents returns "UnrecognizedClientException" the most likely cause is a non-valid Amazon Web Services access key ID or secret key.

Examples:

Request syntax with placeholder values


resp = client.put_log_events({
  log_group_name: "LogGroupName", # required
  log_stream_name: "LogStreamName", # required
  log_events: [ # required
    {
      timestamp: 1, # required
      message: "EventMessage", # required
    },
  ],
  sequence_token: "SequenceToken",
  entity: {
    key_attributes: {
      "EntityKeyAttributesKey" => "EntityKeyAttributesValue",
    },
    attributes: {
      "EntityAttributesKey" => "EntityAttributesValue",
    },
  },
})

Response structure


resp.next_sequence_token #=> String
resp.rejected_log_events_info.too_new_log_event_start_index #=> Integer
resp.rejected_log_events_info.too_old_log_event_end_index #=> Integer
resp.rejected_log_events_info.expired_log_event_end_index #=> Integer
resp.rejected_entity_info.error_type #=> String, one of "InvalidEntity", "InvalidTypeValue", "InvalidKeyAttributes", "InvalidAttributes", "EntitySizeTooLarge", "UnsupportedLogGroupType", "MissingRequiredFields"
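
Example: uploading a small batch of events with millisecond timestamps in chronological order (a minimal sketch; the region, log group, and log stream names are illustrative assumptions and are expected to exist already)


require "aws-sdk-cloudwatchlogs"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

now_ms = (Time.now.to_f * 1000).to_i  # timestamps are milliseconds since the Unix epoch

resp = client.put_log_events({
  log_group_name: "my-app-log-group",  # hypothetical
  log_stream_name: "instance-1",       # hypothetical
  log_events: [                        # must be in chronological order by timestamp
    { timestamp: now_ms - 10, message: "request started" },
    { timestamp: now_ms,      message: "request completed" },
  ],
})

# Inspect any events the service rejected (too new, too old, or expired).
p resp.rejected_log_events_info if resp.rejected_log_events_info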

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name (required, String)

    The name of the log stream.

  • :log_events (required, Array<Types::InputLogEvent>)

    The log events.

  • :sequence_token (String)

    The sequence token obtained from the response of the previous PutLogEvents call.

    The sequenceToken parameter is now ignored in PutLogEvents actions. PutLogEvents actions are now accepted and never return InvalidSequenceTokenException or DataAlreadyAcceptedException even if the sequence token is not valid.

  • :entity (Types::Entity)

    The entity associated with the log events.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5171

def put_log_events(params = {}, options = {})
  req = build_request(:put_log_events, params)
  req.send_request(options)
end

#put_metric_filter(params = {}) ⇒ Struct

Creates or updates a metric filter and associates it with the specified log group. With metric filters, you can configure rules to extract metric data from log events ingested through PutLogEvents.

The maximum number of metric filters that can be associated with a log group is 100.

Using regular expressions to create metric filters is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in metric filters, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.

When you create a metric filter, you can also optionally assign a unit and dimensions to the metric that is created.

Metrics extracted from log events are charged as custom metrics. To prevent unexpected high charges, do not specify high-cardinality fields such as IPAddress or requestID as dimensions. Each different value found for a dimension is treated as a separate metric and accrues charges as a separate custom metric.

CloudWatch Logs might disable a metric filter if it generates 1,000 different name/value pairs for your specified dimensions within one hour.

You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges.

Examples:

Request syntax with placeholder values


resp = client.put_metric_filter({
  log_group_name: "LogGroupName", # required
  filter_name: "FilterName", # required
  filter_pattern: "FilterPattern", # required
  metric_transformations: [ # required
    {
      metric_name: "MetricName", # required
      metric_namespace: "MetricNamespace", # required
      metric_value: "MetricValue", # required
      default_value: 1.0,
      dimensions: {
        "DimensionsKey" => "DimensionsValue",
      },
      unit: "Seconds", # accepts Seconds, Microseconds, Milliseconds, Bytes, Kilobytes, Megabytes, Gigabytes, Terabytes, Bits, Kilobits, Megabits, Gigabits, Terabits, Percent, Count, Bytes/Second, Kilobytes/Second, Megabytes/Second, Gigabytes/Second, Terabytes/Second, Bits/Second, Kilobits/Second, Megabits/Second, Gigabits/Second, Terabits/Second, Count/Second, None
    },
  ],
  apply_on_transformed_logs: false,
})
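
Example: counting ERROR lines as a custom metric (a minimal sketch; the region, log group, filter name, metric name, and namespace are illustrative assumptions). No dimensions are assigned, in line with the cost warning above about high-cardinality dimensions.


require "aws-sdk-cloudwatchlogs"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

client.put_metric_filter({
  log_group_name: "my-app-log-group",  # hypothetical
  filter_name: "error-count",          # hypothetical
  filter_pattern: "ERROR",             # match events that contain the term ERROR
  metric_transformations: [
    {
      metric_name: "ErrorCount",       # hypothetical custom metric
      metric_namespace: "MyApp",       # hypothetical namespace
      metric_value: "1",               # emit 1 for each matching event
      default_value: 0.0,              # emit 0 when no events match
      unit: "Count",
    },
  ],
})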

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name (required, String)

    A name for the metric filter.

  • :filter_pattern (required, String)

    A filter pattern for extracting metric data out of ingested log events.

  • :metric_transformations (required, Array<Types::MetricTransformation>)

    A collection of information that defines how metric data gets emitted.

  • :apply_on_transformed_logs (Boolean)

    This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer.

    If the log group uses either a log-group level or account-level transformer, and you specify true, the metric filter will be applied on the transformed version of the log events instead of the original ingested log events.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5269

def put_metric_filter(params = {}, options = {})
  req = build_request(:put_metric_filter, params)
  req.send_request(options)
end

#put_query_definition(params = {}) ⇒ Types::PutQueryDefinitionResponse

Creates or updates a query definition for CloudWatch Logs Insights. For more information, see Analyzing Log Data with CloudWatch Logs Insights.

To update a query definition, specify its queryDefinitionId in your request. The values of name, queryString, and logGroupNames are changed to the values that you specify in your update operation. No current values are retained from the current query definition. For example, imagine updating a current query definition that includes log groups. If you don't specify the logGroupNames parameter in your update operation, the query definition changes to contain no log groups.

You must have the logs:PutQueryDefinition permission to be able to perform this operation.

Examples:

Request syntax with placeholder values


resp = client.put_query_definition({
  query_language: "CWLI", # accepts CWLI, SQL, PPL
  name: "QueryDefinitionName", # required
  query_definition_id: "QueryId",
  log_group_names: ["LogGroupName"],
  query_string: "QueryDefinitionString", # required
  client_token: "ClientToken",
})

Response structure


resp.query_definition_id #=> String
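
Example: saving a reusable Logs Insights query definition (a minimal sketch; the region, definition name, log group, and query string are illustrative assumptions)


require "aws-sdk-cloudwatchlogs"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

resp = client.put_query_definition({
  query_language: "CWLI",                 # CloudWatch Logs Insights QL
  name: "errors/recent-errors",           # hypothetical name; a common prefix helps filtering later
  log_group_names: ["my-app-log-group"],  # hypothetical log group
  query_string: "fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20",
})

puts resp.query_definition_id  # pass this ID back later to update the definition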

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :query_language (String)

    Specify the query language to use for this query. The options are Logs Insights QL, OpenSearch PPL, and OpenSearch SQL. For more information about the query languages that CloudWatch Logs supports, see Supported query languages.

  • :name (required, String)

    A name for the query definition. If you are saving numerous query definitions, we recommend that you name them. This way, you can find the ones you want by using the first part of the name as a filter in the queryDefinitionNamePrefix parameter of DescribeQueryDefinitions.

  • :query_definition_id (String)

    If you are updating a query definition, use this parameter to specify the ID of the query definition that you want to update. You can use DescribeQueryDefinitions to retrieve the IDs of your saved query definitions.

    If you are creating a query definition, do not specify this parameter. CloudWatch generates a unique ID for the new query definition and includes it in the response to this operation.

  • :log_group_names (Array<String>)

    Use this parameter to include specific log groups as part of your query definition. If your query uses the OpenSearch Service query language, you specify the log group names inside the querystring instead of here.

    If you are updating an existing query definition for the Logs Insights QL or OpenSearch Service PPL and you omit this parameter, then the updated definition will contain no log groups.

  • :query_string (required, String)

    The query string to use for this definition. For more information, see CloudWatch Logs Insights Query Syntax.

  • :client_token (String)

    Used as an idempotency token, to avoid returning an exception if the service receives the same request twice because of a network error.

    A suitable default value is auto-generated. You should normally not need to pass this option.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5377

def put_query_definition(params = {}, options = {})
  req = build_request(:put_query_definition, params)
  req.send_request(options)
end

#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyResponse

Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per Amazon Web Services Region.

Examples:

Request syntax with placeholder values


resp = client.put_resource_policy({
  policy_name: "PolicyName",
  policy_document: "PolicyDocument",
})

Response structure


resp.resource_policy.policy_name #=> String
resp.resource_policy.policy_document #=> String
resp.resource_policy.last_updated_time #=> Integer
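
Example: building and applying the Route 53 resource policy described in the policy_document option below (a minimal sketch; the region, policy name, log group ARN, hosted zone ARN, and account ID are illustrative assumptions)


require "aws-sdk-cloudwatchlogs"
require "json"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

policy = {
  "Version" => "2012-10-17",
  "Statement" => [
    {
      "Sid" => "Route53LogsToCloudWatchLogs",
      "Effect" => "Allow",
      "Principal" => { "Service" => ["route53.amazonaws.com"] },
      "Action" => "logs:PutLogEvents",
      "Resource" => "arn:aws:logs:us-east-1:111122223333:log-group:/aws/route53/example.com:*",  # hypothetical log group ARN
      "Condition" => {
        "ArnLike" => { "aws:SourceArn" => "arn:aws:route53:::hostedzone/Z111111QQQQQQQ" },        # hypothetical hosted zone
        "StringEquals" => { "aws:SourceAccount" => "111122223333" }                               # hypothetical account ID
      }
    }
  ]
}

resp = client.put_resource_policy({
  policy_name: "route53-query-logging",   # hypothetical policy name
  policy_document: JSON.generate(policy),
})

puts resp.resource_policy.policy_name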

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :policy_name (String)

    Name of the new policy. This parameter is required.

  • :policy_document (String)

    Details of the new policy, including the identity of the principal that is enabled to put logs to this account. This is formatted as a JSON string. This parameter is required.

    The following example creates a resource policy enabling the Route 53 service to put DNS query logs into the specified log group. Replace "logArn" with the ARN of your CloudWatch Logs resource, such as a log group or log stream.

    CloudWatch Logs also supports aws:SourceArn and aws:SourceAccount condition context keys.

    In the example resource policy, you would replace the value of SourceArn with the resource making the call from Route 53 to CloudWatch Logs. You would also replace the value of SourceAccount with the Amazon Web Services account ID making that call.

    { "Version": "2012-10-17", "Statement": [ { "Sid": "Route53LogsToCloudWatchLogs", "Effect": "Allow", "Principal": { "Service": [ "route53.amazonaws.com" ] }, "Action": "logs:PutLogEvents", "Resource": "logArn", "Condition": { "ArnLike": { "aws:SourceArn": "myRoute53ResourceArn" }, "StringEquals": { "aws:SourceAccount": "myAwsAccountId" } } } ] }

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5443

def put_resource_policy(params = {}, options = {})
  req = build_request(:put_resource_policy, params)
  req.send_request(options)
end

#put_retention_policy(params = {}) ⇒ Struct

Sets the retention of the specified log group. With a retention policy, you can configure the number of days for which to retain log events in the specified log group.

CloudWatch Logs doesn't immediately delete log events when they reach their retention setting. It typically takes up to 72 hours after that before log events are deleted, but in rare situations might take longer.

To illustrate, imagine that you change a log group to have a longer retention setting when it contains log events that are past the expiration date, but haven't been deleted. Those log events will take up to 72 hours to be deleted after the new retention date is reached. To make sure that log data is deleted permanently, keep a log group at its lower retention setting until 72 hours after the previous retention period ends. Alternatively, wait to change the retention setting until you confirm that the earlier log events are deleted.

When log events reach their retention setting they are marked for deletion. After they are marked for deletion, they do not add to your archival storage costs anymore, even if they are not actually deleted until later. These log events marked for deletion are also not included when you use an API to retrieve the storedBytes value to see how many bytes a log group is storing.

Examples:

Request syntax with placeholder values


resp = client.put_retention_policy({
  log_group_name: "LogGroupName", # required
  retention_in_days: 1, # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :retention_in_days (required, Integer)

    The number of days to retain the log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, and 3653.

    To set a log group so that its log events do not expire, use DeleteRetentionPolicy.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5503

def put_retention_policy(params = {}, options = {})
  req = build_request(:put_retention_policy, params)
  req.send_request(options)
end

#put_subscription_filter(params = {}) ⇒ Struct

Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.

The following destinations are supported for subscription filters:

  • An Amazon Kinesis data stream belonging to the same account as the subscription filter, for same-account delivery.

  • A logical destination created with PutDestination that belongs to a different account, for cross-account delivery. We currently support Kinesis Data Streams and Firehose as logical destinations.

  • An Amazon Kinesis Data Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.

  • A Lambda function that belongs to the same account as the subscription filter, for same-account delivery.

Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in filterName.

Using regular expressions to create subscription filters is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in subscription filters, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.

To perform a PutSubscriptionFilter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.

Examples:

Request syntax with placeholder values


resp = client.put_subscription_filter({
  log_group_name: "LogGroupName", # required
  filter_name: "FilterName", # required
  filter_pattern: "FilterPattern", # required
  destination_arn: "DestinationArn", # required
  role_arn: "RoleArn",
  distribution: "Random", # accepts Random, ByLogStream
  apply_on_transformed_logs: false,
})
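
Example: subscribing a log group to a same-account Kinesis data stream (a minimal sketch; the region, log group, filter name, stream ARN, and role ARN are illustrative assumptions). A role_arn is passed because the destination is not a logical destination.


require "aws-sdk-cloudwatchlogs"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

client.put_subscription_filter({
  log_group_name: "my-app-log-group",                                           # hypothetical
  filter_name: "errors-to-kinesis",                                             # hypothetical
  filter_pattern: "ERROR",                                                      # forward only matching events
  destination_arn: "arn:aws:kinesis:us-east-1:111122223333:stream/log-ingest",  # hypothetical same-account stream
  role_arn: "arn:aws:iam::111122223333:role/CWLtoKinesisRole",                  # hypothetical role allowed to write to the stream
  distribution: "ByLogStream",                                                  # keep events from one log stream together
})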

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name (required, String)

    A name for the subscription filter. If you are updating an existing filter, you must specify the correct name in filterName. To find the name of the filter currently associated with a log group, use DescribeSubscriptionFilters.

  • :filter_pattern (required, String)

    A filter pattern for subscribing to a filtered stream of log events.

  • :destination_arn (required, String)

    The ARN of the destination to deliver matching log events to. Currently, the supported destinations are:

    • An Amazon Kinesis stream belonging to the same account as the subscription filter, for same-account delivery.

    • A logical destination (specified using an ARN) belonging to a different account, for cross-account delivery.

      If you're setting up a cross-account subscription, the destination must have an IAM policy associated with it. The IAM policy must allow the sender to send logs to the destination. For more information, see PutDestinationPolicy.

    • A Kinesis Data Firehose delivery stream belonging to the same account as the subscription filter, for same-account delivery.

    • A Lambda function belonging to the same account as the subscription filter, for same-account delivery.

  • :role_arn (String)

    The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery.

  • :distribution (String)

    The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to random for a more even distribution. This property is only applicable when the destination is an Amazon Kinesis data stream.

  • :apply_on_transformed_logs (Boolean)

    This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer.

    If the log group uses either a log-group level or account-level transformer, and you specify true, the subscription filter will be applied on the transformed version of the log events instead of the original ingested log events.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5637

def put_subscription_filter(params = {}, options = {})
  req = build_request(:put_subscription_filter, params)
  req.send_request(options)
end

#put_transformer(params = {}) ⇒ Struct

Creates or updates a log transformer for a single log group. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information.

After you have created a transformer, CloudWatch Logs performs the transformations at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.

You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.

A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. The processors work one after another, in the order that you list them, like a pipeline. For more information about the available processors to use in a transformer, see Processors that you can use.

Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.

You can create transformers only for the log groups in the Standard log class.

You can also set up a transformer at the account level. For more information, see PutAccountPolicy. If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.

Examples:

Request syntax with placeholder values


resp = client.put_transformer({
  log_group_identifier: "LogGroupIdentifier", # required
  transformer_config: [ # required
    {
      add_keys: {
        entries: [ # required
          {
            key: "Key", # required
            value: "AddKeyValue", # required
            overwrite_if_exists: false,
          },
        ],
      },
      copy_value: {
        entries: [ # required
          {
            source: "Source", # required
            target: "Target", # required
            overwrite_if_exists: false,
          },
        ],
      },
      csv: {
        quote_character: "QuoteCharacter",
        delimiter: "Delimiter",
        columns: ["Column"],
        source: "Source",
      },
      date_time_converter: {
        source: "Source", # required
        target: "Target", # required
        target_format: "TargetFormat",
        match_patterns: ["MatchPattern"], # required
        source_timezone: "SourceTimezone",
        target_timezone: "TargetTimezone",
        locale: "Locale",
      },
      delete_keys: {
        with_keys: ["WithKey"], # required
      },
      grok: {
        source: "Source",
        match: "GrokMatch", # required
      },
      list_to_map: {
        source: "Source", # required
        key: "Key", # required
        value_key: "ValueKey",
        target: "Target",
        flatten: false,
        flattened_element: "first", # accepts first, last
      },
      lower_case_string: {
        with_keys: ["WithKey"], # required
      },
      move_keys: {
        entries: [ # required
          {
            source: "Source", # required
            target: "Target", # required
            overwrite_if_exists: false,
          },
        ],
      },
      parse_cloudfront: {
        source: "Source",
      },
      parse_json: {
        source: "Source",
        destination: "DestinationField",
      },
      parse_key_value: {
        source: "Source",
        destination: "DestinationField",
        field_delimiter: "ParserFieldDelimiter",
        key_value_delimiter: "KeyValueDelimiter",
        key_prefix: "KeyPrefix",
        non_match_value: "NonMatchValue",
        overwrite_if_exists: false,
      },
      parse_route_53: {
        source: "Source",
      },
      parse_postgres: {
        source: "Source",
      },
      parse_vpc: {
        source: "Source",
      },
      parse_waf: {
        source: "Source",
      },
      rename_keys: {
        entries: [ # required
          {
            key: "Key", # required
            rename_to: "RenameTo", # required
            overwrite_if_exists: false,
          },
        ],
      },
      split_string: {
        entries: [ # required
          {
            source: "Source", # required
            delimiter: "Delimiter", # required
          },
        ],
      },
      substitute_string: {
        entries: [ # required
          {
            source: "Source", # required
            from: "FromKey", # required
            to: "ToKey", # required
          },
        ],
      },
      trim_string: {
        with_keys: ["WithKey"], # required
      },
      type_converter: {
        entries: [ # required
          {
            key: "Key", # required
            type: "boolean", # required, accepts boolean, integer, double, string
          },
        ],
      },
      upper_case_string: {
        with_keys: ["WithKey"], # required
      },
    },
  ],
})
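
Example: a two-processor transformer that parses JSON events and renames a key (a minimal sketch; the region, log group, source field, and key names are illustrative assumptions). Processors run in the order listed, like a pipeline.


require "aws-sdk-cloudwatchlogs"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

client.put_transformer({
  log_group_identifier: "my-app-log-group",   # hypothetical Standard-class log group
  transformer_config: [
    { parse_json: { source: "@message" } },   # 1) parse the ingested message body as JSON (source field is an assumption)
    {
      rename_keys: {                          # 2) then normalize a field name
        entries: [
          { key: "lvl", rename_to: "level", overwrite_if_exists: false },
        ],
      },
    },
  ],
})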

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifier (required, String)

    Specify either the name or ARN of the log group to create the transformer for.

  • :transformer_config (required, Array<Types::Processor>)

    This structure contains the configuration of this log transformer. A log transformer is an array of processors, where each processor applies one type of transformation to the log events that are ingested.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5842

def put_transformer(params = {}, options = {})
  req = build_request(:put_transformer, params)
  req.send_request(options)
end

#start_live_tail(params = {}) ⇒ Types::StartLiveTailResponse

Starts a Live Tail streaming session for one or more log groups. A Live Tail session returns a stream of log events that have been recently ingested in the log groups. For more information, see Use Live Tail to view logs in near real time.

The response to this operation is a response stream, over which the server sends live log events and the client receives them.

The following objects are sent over the stream:

  • A single LiveTailSessionStart object is sent at the start of the session.

  • Every second, a LiveTailSessionUpdate object is sent. Each of these objects contains an array of the actual log events.

    If no new log events were ingested in the past second, the LiveTailSessionUpdate object will contain an empty array.

    The array of log events contained in a LiveTailSessionUpdate can include as many as 500 log events. If the number of log events matching the request exceeds 500 per second, the log events are sampled down to 500 log events to be included in each LiveTailSessionUpdate object.

    If your client consumes the log events slower than the server produces them, CloudWatch Logs buffers up to 10 LiveTailSessionUpdate events or 5000 log events, after which it starts dropping the oldest events.

  • A SessionStreamingException object is returned if an unknown error occurs on the server side.

  • A SessionTimeoutException object is returned when the session times out, after it has been kept open for three hours.

You can end a session before it times out by closing the session stream or by closing the client that is receiving the stream. The session also ends if the established connection between the client and the server breaks.

For examples of using an SDK to start a Live Tail session, see Start a Live Tail session using an Amazon Web Services SDK.

Examples:

EventStream Operation Example


You can process each event immediately as it arrives, or wait until the
full response is complete and iterate through the eventstream enumerator.

To interact with events immediately, register #start_live_tail
with callbacks. Callbacks can be registered for specific events or for all
events, including error events.

Callbacks can be passed into the `:event_stream_handler` option or within a
block statement attached to the #start_live_tail call directly. A hybrid
pattern of both is also supported.

The `:event_stream_handler` option takes either a Proc object or an
Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream object.

Usage pattern a): Callbacks with a block attached to #start_live_tail
  Example for registering callbacks for all event types and an error event

  client.start_live_tail( # params input# ) do |stream|
    stream.on_error_event do |event|
      # catch unmodeled error event in the stream
      raise event
      # => Aws::Errors::EventError
      # event.event_type => :error
      # event.error_code => String
      # event.error_message => String
    end

    stream.on_event do |event|
      # process all events as they arrive
      puts event.event_type
      ...
    end

  end

Usage pattern b): Pass in `:event_stream_handler` for #start_live_tail

  1) Create a Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream object
  Example for registering callbacks with specific events

    handler = Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream.new
    handler.on_session_start_event do |event|
      event # => Aws::CloudWatchLogs::Types::sessionStart
    end
    handler.on_session_update_event do |event|
      event # => Aws::CloudWatchLogs::Types::sessionUpdate
    end
    handler.on_session_timeout_exception_event do |event|
      event # => Aws::CloudWatchLogs::Types::SessionTimeoutException
    end
    handler.on_session_streaming_exception_event do |event|
      event # => Aws::CloudWatchLogs::Types::SessionStreamingException
    end

  client.start_live_tail( # params input #, event_stream_handler: handler)

  2) Use a Ruby Proc object
  Example for registering callbacks with specific events

  handler = Proc.new do |stream|
    stream.on_session_start_event do |event|
      event # => Aws::CloudWatchLogs::Types::sessionStart
    end
    stream.on_session_update_event do |event|
      event # => Aws::CloudWatchLogs::Types::sessionUpdate
    end
    stream.on_session_timeout_exception_event do |event|
      event # => Aws::CloudWatchLogs::Types::SessionTimeoutException
    end
    stream.on_session_streaming_exception_event do |event|
      event # => Aws::CloudWatchLogs::Types::SessionStreamingException
    end
  end

  client.start_live_tail( # params input #, event_stream_handler: handler)

Usage pattern c): Hybrid pattern of a) and b)

    handler = Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream.new
    handler.on_session_start_event do |event|
      event # => Aws::CloudWatchLogs::Types::sessionStart
    end
    handler.on_session_update_event do |event|
      event # => Aws::CloudWatchLogs::Types::sessionUpdate
    end
    handler.on_session_timeout_exception_event do |event|
      event # => Aws::CloudWatchLogs::Types::SessionTimeoutException
    end
    handler.on_session_streaming_exception_event do |event|
      event # => Aws::CloudWatchLogs::Types::SessionStreamingException
    end

  client.start_live_tail( # params input #, event_stream_handler: handler) do |stream|
    stream.on_error_event do |event|
      # catch unmodeled error event in the stream
      raise event
      # => Aws::Errors::EventError
      # event.event_type => :error
      # event.error_code => String
      # event.error_message => String
    end
  end

You can also iterate through events after the response is complete.

Events are available at resp.response_stream # => Enumerator
For a parameter input example, refer to the following request syntax.

Request syntax with placeholder values


resp = client.start_live_tail({
  log_group_identifiers: ["LogGroupIdentifier"], # required
  log_stream_names: ["LogStreamName"],
  log_stream_name_prefixes: ["LogStreamName"],
  log_event_filter_pattern: "FilterPattern",
})

Response structure


All events are available at resp.response_stream:
resp.response_stream #=> Enumerator
resp.response_stream.event_types #=> [:session_start, :session_update, :session_timeout_exception, :session_streaming_exception]

For :session_start event available at #on_session_start_event callback and response eventstream enumerator:
event.request_id #=> String
event.session_id #=> String
event.log_group_identifiers #=> Array
event.log_group_identifiers[0] #=> String
event.log_stream_names #=> Array
event.log_stream_names[0] #=> String
event.log_stream_name_prefixes #=> Array
event.log_stream_name_prefixes[0] #=> String
event.log_event_filter_pattern #=> String

For :session_update event available at #on_session_update_event callback and response eventstream enumerator:
event.session_metadata.sampled #=> Boolean
event.session_results #=> Array
event.session_results[0].log_stream_name #=> String
event.session_results[0].log_group_identifier #=> String
event.session_results[0].message #=> String
event.session_results[0].timestamp #=> Integer
event.session_results[0].ingestion_time #=> Integer

For :session_timeout_exception event available at #on_session_timeout_exception_event callback and response eventstream enumerator:
event.message #=> String

For :session_streaming_exception event available at #on_session_streaming_exception_event callback and response eventstream enumerator:
event.message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_identifiers (required, Array<String>)

    An array where each item in the array is a log group to include in the Live Tail session.

    Specify each log group by its ARN.

    If you specify an ARN, the ARN can't end with an asterisk (*).

    You can include up to 10 log groups.

  • :log_stream_names (Array<String>)

    If you specify this parameter, then only log events in the log streams that you specify here are included in the Live Tail session.

    If you specify this field, you can't also specify the logStreamNamePrefixes field.

    You can specify this parameter only if you specify only one log group in logGroupIdentifiers.

  • :log_stream_name_prefixes (Array<String>)

    If you specify this parameter, then only log events in the log streams that have names that start with the prefixes that you specify here are included in the Live Tail session.

    If you specify this field, you can't also specify the logStreamNames field.

    You can specify this parameter only if you specify only one log group in logGroupIdentifiers.

  • :log_event_filter_pattern (String)

    An optional pattern to use to filter the results to include only log events that match the pattern. For example, a filter pattern of error 404 causes only log events that include both error and 404 to be included in the Live Tail stream.

    Regular expression filter patterns are supported.

    For more information about filter pattern syntax, see Filter and Pattern Syntax.

Yields:

  • (event_stream_handler)

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6111

def start_live_tail(params = {}, options = {})
  params = params.dup
  event_stream_handler = case handler = params.delete(:event_stream_handler)
    when EventStreams::StartLiveTailResponseStream then handler
    when Proc then EventStreams::StartLiveTailResponseStream.new.tap(&handler)
    when nil then EventStreams::StartLiveTailResponseStream.new
    else
      msg = "expected :event_stream_handler to be a block or "\
            "instance of Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream"\
            ", got `#{handler.inspect}` instead"
      raise ArgumentError, msg
    end

  yield(event_stream_handler) if block_given?

  req = build_request(:start_live_tail, params)

  req.context[:event_stream_handler] = event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 95)

  req.send_request(options)
end

#start_query(params = {}) ⇒ Types::StartQueryResponse

Starts a query of one or more log groups using CloudWatch Logs Insights. You specify the log groups and time range to query and the query string to use.

For more information, see CloudWatch Logs Insights Query Syntax.

After you run a query using StartQuery, the query results are stored by CloudWatch Logs. You can use GetQueryResults to retrieve the results of a query, using the queryId that StartQuery returns.

To specify the log groups to query, a StartQuery operation must include one of the following:

  • Either exactly one of the following parameters: logGroupName, logGroupNames, or logGroupIdentifiers

  • Or the queryString must include a SOURCE command to select log groups for the query. The SOURCE command can select log groups based on log group name prefix, account ID, and log class.

    For more information about the SOURCE command, see SOURCE.

If you have associated a KMS key with the query results in this account, then StartQuery uses that key to encrypt the results when it stores them. If no key is associated with query results, the query results are encrypted with the default CloudWatch Logs encryption method.

Queries time out after 60 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start a query in a linked source account. For more information, see CloudWatch cross-account observability. For a cross-account StartQuery operation, the query definition must be defined in the monitoring account.

You can have up to 30 concurrent CloudWatch Logs Insights queries, including queries that have been added to dashboards.

Examples:

Request syntax with placeholder values


resp = client.start_query({
  query_language: "CWLI", # accepts CWLI, SQL, PPL
  log_group_name: "LogGroupName",
  log_group_names: ["LogGroupName"],
  log_group_identifiers: ["LogGroupIdentifier"],
  start_time: 1, # required
  end_time: 1, # required
  query_string: "QueryString", # required
  limit: 1,
})

Response structure


resp.query_id #=> String
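
Example: running a query over the last hour and polling for results (a minimal sketch; the region, log group name, and query string are illustrative assumptions, and GetQueryResults is polled naively)


require "aws-sdk-cloudwatchlogs"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

now = Time.now.to_i  # StartQuery takes epoch seconds

resp = client.start_query({
  log_group_names: ["my-app-log-group"],  # hypothetical
  start_time: now - 3600,                 # last hour (inclusive)
  end_time: now,
  query_string: "fields @timestamp, @message | filter @message like /ERROR/ | limit 20",
})

# Poll until the query finishes, then print each result row as field/value pairs.
loop do
  out = client.get_query_results(query_id: resp.query_id)
  if out.status == "Complete"
    out.results.each { |row| p row.map { |field| [field.field, field.value] } }
    break
  end
  break unless %w[Scheduled Running].include?(out.status)  # stop on Failed, Cancelled, etc.
  sleep 1
end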

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :query_language (String)

    Specify the query language to use for this query. The options are Logs Insights QL, OpenSearch PPL, and OpenSearch SQL. For more information about the query languages that CloudWatch Logs supports, see Supported query languages.

  • :log_group_name (String)

    The log group on which to perform the query.

    A StartQuery operation must include exactly one of the following parameters: logGroupName, logGroupNames, or logGroupIdentifiers. The exception is queries using the OpenSearch Service SQL query language, where you specify the log group names inside the querystring instead of here.

  • :log_group_names (Array<String>)

    The list of log groups to be queried. You can include up to 50 log groups.

    A StartQuery operation must include exactly one of the following parameters: logGroupName, logGroupNames, or logGroupIdentifiers. The exception is queries using the OpenSearch Service SQL query language, where you specify the log group names inside the querystring instead of here.

  • :log_group_identifiers (Array<String>)

    The list of log groups to query. You can include up to 50 log groups.

    You can specify them by the log group name or ARN. If a log group that you're querying is in a source account and you're using a monitoring account, you must specify the ARN of the log group here. The query definition must also be defined in the monitoring account.

    If you specify an ARN, use the format arn:aws:logs:region:account-id:log-group:log_group_name. Don't include an asterisk (*) at the end.

    A StartQuery operation must include exactly one of the following parameters: logGroupName, logGroupNames, or logGroupIdentifiers. The exception is queries using the OpenSearch Service SQL query language, where you specify the log group names inside the querystring instead of here.

  • :start_time (required, Integer)

    The beginning of the time range to query. The range is inclusive, so the specified start time is included in the query. Specified as epoch time, the number of seconds since January 1, 1970, 00:00:00 UTC.

  • :end_time (required, Integer)

    The end of the time range to query. The range is inclusive, so the specified end time is included in the query. Specified as epoch time, the number of seconds since January 1, 1970, 00:00:00 UTC.

  • :query_string (required, String)

    The query string to use. For more information, see CloudWatch Logs Insights Query Syntax.

  • :limit (Integer)

    The maximum number of log events to return in the query. If the query string uses the fields command, only the specified fields and their values are returned. The default is 10,000.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6284

def start_query(params = {}, options = {})
  req = build_request(:start_query, params)
  req.send_request(options)
end

#stop_query(params = {}) ⇒ Types::StopQueryResponse

Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running.

Examples:

Request syntax with placeholder values


resp = client.stop_query({
  query_id: "QueryId", # required
})

Response structure


resp.success #=> Boolean

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :query_id (required, String)

    The ID number of the query to stop. To find this ID number, use DescribeQueries.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6315

def stop_query(params = {}, options = {})
  req = build_request(:stop_query, params)
  req.send_request(options)
end

#tag_log_group(params = {}) ⇒ Struct

The TagLogGroup operation is on the path to deprecation. We recommend that you use TagResource instead.

Adds or updates the specified tags for the specified log group.

To list the tags for a log group, use ListTagsForResource. To remove tags, use UntagResource.

For more information about tags, see Tag Log Groups in Amazon CloudWatch Logs in the Amazon CloudWatch Logs User Guide.

CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified tags to log groups using the aws:Resource/key-name or aws:TagKeys condition keys. For more information about using tags to control access, see Controlling access to Amazon Web Services resources using tags.

Examples:

Request syntax with placeholder values


resp = client.tag_log_group({
  log_group_name: "LogGroupName", # required
  tags: { # required
    "TagKey" => "TagValue",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :tags (required, Hash<String,String>)

    The key-value pairs to use for the tags.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6366

def tag_log_group(params = {}, options = {})
  req = build_request(:tag_log_group, params)
  req.send_request(options)
end

#tag_resource(params = {}) ⇒ Struct

Assigns one or more tags (key-value pairs) to the specified CloudWatch Logs resource. Currently, the only CloudWatch Logs resources that can be tagged are log groups and destinations.

Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.

Tags don't have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters.

You can use the TagResource action with a resource that already has tags. If you specify a new tag key for the resource, this tag is appended to the list of tags associated with the resource. If you specify a tag key that is already associated with the resource, the new tag value that you specify replaces the previous value for that tag.

You can associate as many as 50 tags with a CloudWatch Logs resource.

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "AmazonResourceName", # required
  tags: { # required
    "TagKey" => "TagValue",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The ARN of the resource that you're adding tags to.

    The ARN format of a log group is arn:aws:logs:Region:account-id:log-group:log-group-name

    The ARN format of a destination is arn:aws:logs:Region:account-id:destination:destination-name

    For more information about ARN format, see CloudWatch Logs resources and operations.

  • :tags (required, Hash<String,String>)

    The list of key-value pairs to associate with the resource.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6424

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end

#test_metric_filter(params = {}) ⇒ Types::TestMetricFilterResponse

Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern.

Examples:

Request syntax with placeholder values


resp = client.test_metric_filter({
  filter_pattern: "FilterPattern", # required
  log_event_messages: ["EventMessage"], # required
})

Response structure


resp.matches #=> Array
resp.matches[0].event_number #=> Integer
resp.matches[0].event_message #=> String
resp.matches[0].extracted_values #=> Hash
resp.matches[0].extracted_values["Token"] #=> String
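
Example: validating a filter pattern against sample messages before creating a metric filter (a minimal sketch; the region, pattern, and sample messages are illustrative assumptions)


require "aws-sdk-cloudwatchlogs"

client = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

resp = client.test_metric_filter({
  filter_pattern: "ERROR",
  log_event_messages: [
    "2024-01-01T00:00:00Z ERROR something failed",  # hypothetical sample events
    "2024-01-01T00:00:01Z INFO all good",
  ],
})

resp.matches.each do |match|
  puts "event ##{match.event_number} matched: #{match.event_message}"
end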

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter_pattern (required, String)

    A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message.

  • :log_event_messages (required, Array<String>)

    The log event messages to test.

Returns:

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6465

def test_metric_filter(params = {}, options = {})
  req = build_request(:test_metric_filter, params)
  req.send_request(options)
end

#test_transformer(params = {}) ⇒ Types::TestTransformerResponse

Use this operation to test a log transformer. You enter the transformer configuration and a set of log events to test with. The operation responds with an array that includes the original log events and the transformed versions.

Examples:

Request syntax with placeholder values


resp = client.test_transformer({
  transformer_config: [ # required
    {
      add_keys: {
        entries: [ # required
          {
            key: "Key", # required
            value: "AddKeyValue", # required
            overwrite_if_exists: false,
          },
        ],
      },
      copy_value: {
        entries: [ # required
          {
            source: "Source", # required
            target: "Target", # required
            overwrite_if_exists: false,
          },
        ],
      },
      csv: {
        quote_character: "QuoteCharacter",
        delimiter: "Delimiter",
        columns: ["Column"],
        source: "Source",
      },
      date_time_converter: {
        source: "Source", # required
        target: "Target", # required
        target_format: "TargetFormat",
        match_patterns: ["MatchPattern"], # required
        source_timezone: "SourceTimezone",
        target_timezone: "TargetTimezone",
        locale: "Locale",
      },
      delete_keys: {
        with_keys: ["WithKey"], # required
      },
      grok: {
        source: "Source",
        match: "GrokMatch", # required
      },
      list_to_map: {
        source: "Source", # required
        key: "Key", # required
        value_key: "ValueKey",
        target: "Target",
        flatten: false,
        flattened_element: "first", # accepts first, last
      },
      lower_case_string: {
        with_keys: ["WithKey"], # required
      },
      move_keys: {
        entries: [ # required
          {
            source: "Source", # required
            target: "Target", # required
            overwrite_if_exists: false,
          },
        ],
      },
      parse_cloudfront: {
        source: "Source",
      },
      parse_json: {
        source: "Source",
        destination: "DestinationField",
      },
      parse_key_value: {
        source: "Source",
        destination: "DestinationField",
        field_delimiter: "ParserFieldDelimiter",
        key_value_delimiter: "KeyValueDelimiter",
        key_prefix: "KeyPrefix",
        non_match_value: "NonMatchValue",
        overwrite_if_exists: false,
      },
      parse_route_53: {
        source: "Source",
      },
      parse_postgres: {
        source: "Source",
      },
      parse_vpc: {
        source: "Source",
      },
      parse_waf: {
        source: "Source",
      },
      rename_keys: {
        entries: [ # required
          {
            key: "Key", # required
            rename_to: "RenameTo", # required
            overwrite_if_exists: false,
          },
        ],
      },
      split_string: {
        entries: [ # required
          {
            source: "Source", # required
            delimiter: "Delimiter", # required
          },
        ],
      },
      substitute_string: {
        entries: [ # required
          {
            source: "Source", # required
            from: "FromKey", # required
            to: "ToKey", # required
          },
        ],
      },
      trim_string: {
        with_keys: ["WithKey"], # required
      },
      type_converter: {
        entries: [ # required
          {
            key: "Key", # required
            type: "boolean", # required, accepts boolean, integer, double, string
          },
        ],
      },
      upper_case_string: {
        with_keys: ["WithKey"], # required
      },
    },
  ],
  log_event_messages: ["EventMessage"], # required
})

Response structure


resp.transformed_logs #=> Array
resp.transformed_logs[0].event_number #=> Integer
resp.transformed_logs[0].event_message #=> String
resp.transformed_logs[0].transformed_event_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :transformer_config (required, Array<Types::Processor>)

    This structure contains the configuration of this log transformer that you want to test. A log transformer is an array of processors, where each processor applies one type of transformation to the log events that are ingested.

  • :log_event_messages (required, Array<String>)

    An array of the raw log events that you want to use to test this transformer.

Returns:

  • (Types::TestTransformerResponse)

    Returns a response object which responds to the following methods:

    • #transformed_logs => Array

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6638

def test_transformer(params = {}, options = {})
  req = build_request(:test_transformer, params)
  req.send_request(options)
end
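
The following is a minimal sketch of a call matching the syntax above, assuming a single parse_json processor and two illustrative sample messages; the region and messages are placeholders rather than values from this reference.


require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1") # placeholder region

# Test a one-processor transformer against two sample JSON messages.
# parse_json here relies on the service defaults for source and destination.
resp = logs.test_transformer({
  transformer_config: [
    { parse_json: {} },
  ],
  log_event_messages: [
    '{"level":"ERROR","msg":"timeout"}',
    '{"level":"INFO","msg":"ok"}',
  ],
})

resp.transformed_logs.each do |log|
  puts "event #{log.event_number}: #{log.transformed_event_message}"
end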

#untag_log_group(params = {}) ⇒ Struct

The UntagLogGroup operation is on the path to deprecation. We recommend that you use UntagResource instead.

Removes the specified tags from the specified log group.

To list the tags for a log group, use ListTagsForResource. To add tags, use TagResource.

CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified tags to log groups using the aws:Resource/key-name or aws:TagKeys condition keys.

Examples:

Request syntax with placeholder values


resp = client.untag_log_group({
  log_group_name: "LogGroupName", # required
  tags: ["TagKey"], # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :log_group_name (required, String)

    The name of the log group.

  • :tags (required, Array<String>)

    The tag keys. The corresponding tags are removed from the log group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6680

def untag_log_group(params = {}, options = {})
  req = build_request(:untag_log_group, params)
  req.send_request(options)
end
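
As a concrete counterpart to the placeholder syntax above, the sketch below removes two tag keys from a log group; the log group name and tag keys are placeholders. Because this operation is on a deprecation path, new code should generally prefer #untag_resource with the log group ARN instead.


logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1") # placeholder region

# Remove the "team" and "environment" tag keys from a log group.
logs.untag_log_group({
  log_group_name: "my-log-group",   # placeholder log group name
  tags: ["team", "environment"],    # tag keys to remove
})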

#untag_resource(params = {}) ⇒ Struct

Removes one or more tags from the specified resource.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "AmazonResourceName", # required
  tag_keys: ["TagKey"], # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The ARN of the CloudWatch Logs resource that you're removing tags from.

    The ARN format of a log group is arn:aws:logs:Region:account-id:log-group:log-group-name

    The ARN format of a destination is arn:aws:logs:Region:account-id:destination:destination-name

    For more information about ARN format, see CloudWatch Logs resources and operations.

  • :tag_keys (required, Array<String>)

    The list of tag keys to remove from the resource.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6720

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end
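
Building on the ARN formats listed above, this minimal sketch removes tag keys from a log group addressed by ARN; the region, account ID, log group name, and tag keys are placeholders.


logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1") # placeholder region

# ARN assembled from the documented log-group format; all identifiers are placeholders.
log_group_arn = "arn:aws:logs:us-east-1:123456789012:log-group:my-log-group"

logs.untag_resource({
  resource_arn: log_group_arn,
  tag_keys: ["team", "environment"],
})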

#update_anomaly(params = {}) ⇒ Struct

Use this operation to suppress anomaly detection for a specified anomaly or pattern. If you suppress an anomaly, CloudWatch Logs won't report new occurrences of that anomaly and won't update that anomaly with new data. If you suppress a pattern, CloudWatch Logs won't report any anomalies related to that pattern.

You must specify either anomalyId or patternId, but you can't specify both parameters in the same operation.

If you have previously used this operation to suppress detection of a pattern or anomaly, you can use it again to cause CloudWatch Logs to end the suppression. To do this, use this operation and specify the anomaly or pattern to stop suppressing, and omit the suppressionType and suppressionPeriod parameters.

Examples:

Request syntax with placeholder values


resp = client.update_anomaly({
  anomaly_id: "AnomalyId",
  pattern_id: "PatternId",
  anomaly_detector_arn: "AnomalyDetectorArn", # required
  suppression_type: "LIMITED", # accepts LIMITED, INFINITE
  suppression_period: {
    value: 1,
    suppression_unit: "SECONDS", # accepts SECONDS, MINUTES, HOURS
  },
  baseline: false,
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :anomaly_id (String)

    If you are suppressing or unsuppressing an anomaly, specify its unique ID here. You can find anomaly IDs by using the ListAnomalies operation.

  • :pattern_id (String)

    If you are suppressing or unsuppressing a pattern, specify its unique ID here. You can find pattern IDs by using the ListAnomalies operation.

  • :anomaly_detector_arn (required, String)

    The ARN of the anomaly detector that this operation is to act on.

  • :suppression_type (String)

    Use this to specify whether the suppression is to be temporary or infinite. If you specify LIMITED, you must also specify a suppressionPeriod. If you specify INFINITE, any value for suppressionPeriod is ignored.

  • :suppression_period (Types::SuppressionPeriod)

    If you are temporarily suppressing an anomaly or pattern, use this structure to specify how long the suppression is to last.

  • :baseline (Boolean)

    Set this to true to prevent CloudWatch Logs from displaying this behavior as an anomaly in the future. The behavior is then treated as baseline behavior. However, if similar but more severe occurrences of this behavior occur in the future, those will still be reported as anomalies.

    The default is false.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6800

def update_anomaly(params = {}, options = {})
  req = build_request(:update_anomaly, params)
  req.send_request(options)
end
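
To make the suppress/unsuppress flow concrete, the sketch below first suppresses a single anomaly for 24 hours and then ends the suppression by calling the operation again without the suppression parameters, as described above. The detector ARN and anomaly ID are placeholders.


logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1") # placeholder region

detector_arn = "arn:aws:logs:us-east-1:123456789012:anomaly-detector:example" # placeholder
anomaly_id   = "example-anomaly-id"                                           # placeholder

# Temporarily suppress the anomaly for 24 hours.
logs.update_anomaly({
  anomaly_detector_arn: detector_arn,
  anomaly_id: anomaly_id,
  suppression_type: "LIMITED",
  suppression_period: { value: 24, suppression_unit: "HOURS" },
})

# Later: end the suppression by omitting suppression_type and suppression_period.
logs.update_anomaly({
  anomaly_detector_arn: detector_arn,
  anomaly_id: anomaly_id,
})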

#update_delivery_configuration(params = {}) ⇒ Struct

Use this operation to update the configuration of a delivery to change either the S3 path pattern or the format of the delivered logs. You can't use this operation to change the source or destination of the delivery.

Examples:

Request syntax with placeholder values


resp = client.update_delivery_configuration({
  id: "DeliveryId", # required
  record_fields: ["FieldHeader"],
  field_delimiter: "FieldDelimiter",
  s3_delivery_configuration: {
    suffix_path: "DeliverySuffixPath",
    enable_hive_compatible_path: false,
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :id (required, String)

    The ID of the delivery to be updated by this request.

  • :record_fields (Array<String>)

    The list of record fields to be delivered to the destination, in order. If the delivery's log source has mandatory fields, they must be included in this list.

  • :field_delimiter (String)

    The field delimiter to use between record fields when the final output format of a delivery is in Plain, W3C, or Raw format.

  • :s3_delivery_configuration (Types::S3DeliveryConfiguration)

    This structure contains parameters that are valid only when the delivery's delivery destination is an S3 bucket.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6848

def update_delivery_configuration(params = {}, options = {})
  req = build_request(:update_delivery_configuration, params)
  req.send_request(options)
end
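
The following minimal sketch updates only the S3 portion of an existing delivery configuration, leaving the record fields and field delimiter unchanged; the delivery ID and suffix path are placeholders.


logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1") # placeholder region

logs.update_delivery_configuration({
  id: "example-delivery-id",           # placeholder delivery ID
  s3_delivery_configuration: {
    suffix_path: "my-prefix/errors",   # placeholder suffix path
    enable_hive_compatible_path: false,
  },
})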

#update_log_anomaly_detector(params = {}) ⇒ Struct

Updates an existing log anomaly detector.

Examples:

Request syntax with placeholder values


resp = client.update_log_anomaly_detector({
  anomaly_detector_arn: "AnomalyDetectorArn", # required
  evaluation_frequency: "ONE_MIN", # accepts ONE_MIN, FIVE_MIN, TEN_MIN, FIFTEEN_MIN, THIRTY_MIN, ONE_HOUR
  filter_pattern: "FilterPattern",
  anomaly_visibility_time: 1,
  enabled: false, # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :anomaly_detector_arn (required, String)

    The ARN of the anomaly detector that you want to update.

  • :evaluation_frequency (String)

    Specifies how often the anomaly detector runs and looks for anomalies. Set this value according to the frequency at which the log group receives new logs. For example, if the log group receives new log events every 10 minutes, then setting evaluationFrequency to FIFTEEN_MIN might be appropriate.

  • :filter_pattern (String)

    A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message.

  • :anomaly_visibility_time (Integer)

    The number of days to use as the life cycle of anomalies. After this time, anomalies are automatically baselined and the anomaly detector model will treat new occurrences of a similar event as normal. Therefore, if you do not correct the cause of an anomaly during this time, it will be considered normal going forward and will not be detected.

  • :enabled (required, Boolean)

    Use this parameter to pause or restart the anomaly detector.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6898

def update_log_anomaly_detector(params = {}, options = {})
  req = build_request(:update_log_anomaly_detector, params)
  req.send_request(options)
end
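
Because :enabled is the only required field apart from the detector ARN, a common use of this operation is pausing and resuming a detector. The sketch below pauses one and later resumes it with a slower evaluation frequency; the detector ARN is a placeholder.


logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1") # placeholder region

detector_arn = "arn:aws:logs:us-east-1:123456789012:anomaly-detector:example" # placeholder

# Pause the detector without changing its other settings.
logs.update_log_anomaly_detector({
  anomaly_detector_arn: detector_arn,
  enabled: false,
})

# Resume it later, evaluating every fifteen minutes.
logs.update_log_anomaly_detector({
  anomaly_detector_arn: detector_arn,
  evaluation_frequency: "FIFTEEN_MIN",
  enabled: true,
})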