Processors that you can use
This section contains information about each processor that you can use in a log event transformer. The processors can be categorized into configurable parsers, built-in processors for AWS vended logs, string mutators, JSON mutators, and datatype converters.
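To make the examples that follow concrete, here is a minimal, illustrative Python sketch of how a transformer applies its processors in order, with each processor receiving the event produced by the previous one. This is not the CloudWatch Logs implementation; the processor behavior is simplified and the helper names are hypothetical.

```python
import json

def parse_json(event, config):
    # Simplified stand-in for parseJSON: parse @message and merge the keys at the root.
    event.update(json.loads(event["@message"]))
    return event

def lower_case_string(event, config):
    # Simplified stand-in for lowerCaseString, handling top-level keys only.
    for key in config["withKeys"]:
        if isinstance(event.get(key), str):
            event[key] = event[key].lower()
    return event

PROCESSORS = {"parseJSON": parse_json, "lowerCaseString": lower_case_string}

def apply_transformer(event, transformer_config):
    for step in transformer_config:          # processors run in the order listed
        (name, config), = step.items()       # each step names exactly one processor
        event = PROCESSORS[name](event, config)
    return event

event = {"@message": '{"Level": "INFO"}'}
config = [{"parseJSON": {}}, {"lowerCaseString": {"withKeys": ["Level"]}}]
print(apply_transformer(event, config))      # {'@message': '...', 'Level': 'info'}
```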
Configurable parser-type processors
parseJSON
The parseJSON processor parses JSON log events and inserts extracted JSON key-value pairs under the destination. If you don't specify a destination, the processor places the key-value pair under the root node.
The original @message content is not changed; the new keys are added to the message.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
source | Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, store.book | No | | Maximum length: 128. Maximum nested key depth: 3 |
destination | The destination field of the parsed JSON | No | | Maximum length: 128. Maximum nested key depth: 3 |
Example
Suppose an ingested log event looks like this:
{ "outer_key": { "inner_key": "inner_value" } }
Then if we have this parseJSON processor:
[ "parseJSON": { "destination": "new_key" } ]
The transformed log event would be the following.
{ "new_key": { "outer_key": { "inner_key": "inner_value" } } }
grok
The grok processor uses pattern matching to parse and structure unstructured data. It can also extract fields from log messages.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
source | Path to the field in the log event to apply grok matching to | No | | Maximum length: 128. Maximum nested key depth: 3 |
match | The grok pattern to match against the log event. The supported grok patterns are listed at the end of this section. | Yes | | Maximum length: 128. Maximum of 5 grok patterns. Grok patterns don't support type conversions. For common log format patterns (APACHE_ACCESS_LOG, NGINX_ACCESS_LOG, SYSLOG5424), only GREEDYDATA or DATA patterns are supported at the end. |
Example
Suppose an ingested log event looks like this:
{ "outer_key": { "inner_key": "inner_value" } }
To extract the inner_value from the above log event, you can use a transformer with a combination of the parseJSON and grok processors.
[ "parseJSON": {}, "grok": { "source": "outer_key.inner_key", "match": "%{NOTSPACE:new_key}" } } ]
The transformed log event would be the following.
{ "outer_key": { "inner_key": "inner_value" } "new_key": "inner_value" }
Supported grok patterns
The following patterns are supported by the grok processor.
Basic patterns (can be used any number of times in an expression): USERNAME, USER, INT, BASE10NUM, NUMBER, BASE16NUM, BASE16FLOAT, POSINT, NONNEGINT, WORD, NOTSPACE, SPACE, DATA, GREEDYDATA, QUOTEDSTRING, UUID, URN, ARN

Networking patterns: MAC, CISCOMAC, WINDOWSMAC, COMMONMAC, IPV6, IPV4, IP, HOSTNAME, HOST, IPORHOST, HOSTPORT

Path patterns: PATH, UNIXPATH, WINPATH, TTY, URIPROTO, URIHOST, URIPATH, URIPARAM, URIPATHPARAM, URI

Log level patterns: LOGLEVEL

Date and time patterns: MONTH, MONTHNUM, MONTHNUM2, MONTHDAY, YEAR, DAY, TIME, HOUR, MINUTE, SECOND, DATE_US, DATE_EU, ISO8601_TIMEZONE, ISO8601_SECOND, TIMESTAMP_ISO8601, DATE, DATESTAMP, TZ, DATESTAMP_RFC822, DATESTAMP_RFC2822, DATESTAMP_OTHER, DATESTAMP_EVENTLOG, SYSLOGTIMESTAMP, PROG, SYSLOGPROG, SYSLOGHOST, SYSLOGFACILITY, HTTPDATE

Common log format patterns (each can be used at most once per grok expression): APACHE_ACCESS_LOG, NGINX_ACCESS_LOG, SYSLOG5424
csv
The csv processor parses comma-separated values (CSV) from the log events into columns.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
source | Path to the field in the log event that will be parsed | No | | Maximum length: 128. Maximum nested key depth: 3 |
delimiter | The character used to separate each column in the original comma-separated value log event | No | | Maximum length: 1 |
quoteCharacter | Character used as a text qualifier for a single column of data | No | | Maximum length: 1 |
columns | List of names to use for the columns in the transformed log event | No | | Maximum CSV columns: 100. Maximum length: 128. Maximum nested key depth: 3 |
Example
Suppose part of an ingested log event looks like this:
'Akua Mansa',28,'New York, USA'
Suppose we use only the csv processor:
[ "csv": { "delimiter": ":", "quoteCharacter": ":"" } ]
The transformed log event would be the following.
{ "column_1": "Akua Mansa", "column_2": "28", "column_3": "New York, USA" }
parseKeyValue
Use the parseKeyValue processor to parse a specified field into key-value pairs. You can customize the processor to parse field information with the following options.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
source | Path to the field in the log event that will be parsed | No | | Maximum length: 128. Maximum nested key depth: 3 |
destination | The destination field to put the extracted key-value pairs into | No | | Maximum length: 128 |
fieldDelimiter | The field delimiter string that is used between key-value pairs in the original log events | No | | Maximum length: 128 |
keyValueDelimiter | The delimiter string to use between the key and value in each pair in the transformed log event | No | | Maximum length: 128 |
nonMatchValue | A value to insert into the value field in the result, when a key-value pair is not successfully split | No | | Maximum length: 128 |
keyPrefix | If you want to add a prefix to all transformed keys, specify it here | No | | Maximum length: 128 |
overwriteIfExists | Whether to overwrite the value if the destination key already exists | No | | |
Example
Take the following example log event:
key1:value1!key2:value2!key3:value3!key4
Suppose we use the following processor configuration:
[ "parseKeyValue": { "destination": "new_key", "fieldDelimiter": "!", "keyValueDelimiter": ":", "nonMatchValue": "defaultValue", "keyPrefix": "parsed_" } ]
The transformed log event would be the following.
{ "new_key": { "parsed_key1": "value1", "parsed_key2": "value2", "parsed_key3": "value3", "parsed_key4": "defaultValue" } }
Built-in processors for AWS vended logs
parseWAF
Use this processor to parse AWS WAF vended logs. It takes the contents of httpRequest.headers and creates JSON keys from each header name, with the corresponding value. It does the same for labels. These transformations can make querying AWS WAF logs much easier.
For more information about AWS WAF log format, see Log examples for web ACL traffic.
This processor accepts only @message as the input.
Important
If you use this processor, it must be the first processor in your transformer.
Example
Take the following example log event:
{ "timestamp": 1576280412771, "formatVersion": 1, "webaclId": "arn:aws:wafv2:ap-southeast-2:111122223333:regional/webacl/STMTest/1EXAMPLE-2ARN-3ARN-4ARN-123456EXAMPLE", "terminatingRuleId": "STMTest_SQLi_XSS", "terminatingRuleType": "REGULAR", "action": "BLOCK", "terminatingRuleMatchDetails": [ { "conditionType": "SQL_INJECTION", "sensitivityLevel": "HIGH", "location": "HEADER", "matchedData": ["10", "AND", "1"] } ], "httpSourceName": "-", "httpSourceId": "-", "ruleGroupList": [], "rateBasedRuleList": [], "nonTerminatingMatchingRules": [], "httpRequest": { "clientIp": "1.1.1.1", "country": "AU", "headers": [ { "name": "Host", "value": "localhost:1989" }, { "name": "User-Agent", "value": "curl/7.61.1" }, { "name": "Accept", "value": "*/*" }, { "name": "x-stm-test", "value": "10 AND 1=1" } ], "uri": "/myUri", "args": "", "httpVersion": "HTTP/1.1", "httpMethod": "GET", "requestId": "rid" }, "labels": [{ "name": "value" }] }
The processor configuration is this:
[ "parseWAF": {} ]
The transformed log event would be the following.
{ "httpRequest": { "headers": { "Host": "localhost:1989", "User-Agent": "curl/7.61.1", "Accept": "*/*", "x-stm-test": "10 AND 1=1" }, "clientIp": "1.1.1.1", "country": "AU", "uri": "/myUri", "args": "", "httpVersion": "HTTP/1.1", "httpMethod": "GET", "requestId": "rid" }, "labels": { "name": "value" }, "timestamp": 1576280412771, "formatVersion": 1, "webaclId": "arn:aws:wafv2:ap-southeast-2:111122223333:regional/webacl/STMTest/1EXAMPLE-2ARN-3ARN-4ARN-123456EXAMPLE", "terminatingRuleId": "STMTest_SQLi_XSS", "terminatingRuleType": "REGULAR", "action": "BLOCK", "terminatingRuleMatchDetails": [ { "conditionType": "SQL_INJECTION", "sensitivityLevel": "HIGH", "location": "HEADER", "matchedData": ["10", "AND", "1"] } ], "httpSourceName": "-", "httpSourceId": "-", "ruleGroupList": [], "rateBasedRuleList": [], "nonTerminatingMatchingRules": [] }
parsePostgres
Use this processor to parse Amazon RDS for PostgreSQL vended logs, extract fields, and convert them to JSON format. For more information about RDS for PostgreSQL log format, see RDS for PostgreSQL database log files.
This processor accepts only @message as the input.
Important
If you use this processor, it must be the first processor in your transformer.
Example
Take the following example log event:
2019-03-10 03:54:59 UTC:10.0.0.123(52834):postgres@logtestdb:[20175]:ERROR: column "wrong_column_name" does not exist at character 8
The processor configuration is this:
[ "parsePostgres": {} ]
The transformed log event would be the following.
{ "logTime": "2019-03-10 03:54:59 UTC", "srcIp": "10.0.0.123(52834)", "userName": "postgres", "dbName": "logtestdb", "processId": "20175", "logLevel": "ERROR" }
parseCloudfront
Use this processor to parse Amazon CloudFront vended logs, extract fields, and convert them into JSON format. Encoded field values are decoded. Values that are integers and doubles are treated as such. For more information about Amazon CloudFront log format, see Configure and use standard logs (access logs).
This processor accepts only @message as the input.
Important
If you use this processor, it must be the first processor in your transformer.
Example
Take the following example log event:
2019-12-04 21:02:31 LAX1 392 192.0.2.24 GET d111111abcdef8.cloudfront.net /index.html 200 - Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36 - - Hit SOX4xwn4XV6Q4rgb7XiVGOHms_BGlTAC4KyHmureZmBNrjGdRLiNIQ== d111111abcdef8.cloudfront.net https 23 0.001 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Hit HTTP/2.0 - - 11040 0.001 Hit text/html 78 - -
The processor configuration is this:
[ "parseCloudfront": {} ]
The transformed log event would be the following.
{ "date": "2019-12-04", "time": "21:02:31", "x-edge-location": "LAX1", "sc-bytes": 392, "c-ip": "192.0.2.24", "cs-method": "GET", "cs(Host)": "d111111abcdef8.cloudfront.net", "cs-uri-stem": "/index.html", "sc-status": 200, "cs(Referer)": "-", "cs(User-Agent)": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36", "cs-uri-query": "-", "cs(Cookie)": "-", "x-edge-result-type": "Hit", "x-edge-request-id": "SOX4xwn4XV6Q4rgb7XiVGOHms_BGlTAC4KyHmureZmBNrjGdRLiNIQ==", "x-host-header": "d111111abcdef8.cloudfront.net", "cs-protocol": "https", "cs-bytes": 23, "time-taken": 0.001, "x-forwarded-for": "-", "ssl-protocol": "TLSv1.2", "ssl-cipher": "ECDHE-RSA-AES128-GCM-SHA256", "x-edge-response-result-type": "Hit", "cs-protocol-version": "HTTP/2.0", "fle-status": "-", "fle-encrypted-fields": "-", "c-port": 11040, "time-to-first-byte": 0.001, "x-edge-detailed-result-type": "Hit", "sc-content-type": "text/html", "sc-content-len": 78, "sc-range-start": "-", "sc-range-end": "-" }
parseRoute53
Use this processor to parse Amazon Route 53 Public Data Plane vended logs, extract fields, and convert them into JSON format. Encoded field values are decoded.
This processor accepts only @message as the input.
Important
If you use this processor, it must be the first processor in your transformer.
Example
Take the following example log event:
1.0 2017-12-13T08:15:50.235Z Z123412341234 example.com AAAA NOERROR TCP IAD12 192.0.2.0 198.51.100.0/24
The processor configuration is this:
[ "parseRoute53": {} ]
The transformed log event would be the following.
{ "version": 1.0, "queryTimestamp": "2017-12-13T08:15:50.235Z", "hostZoneId": "Z123412341234", "queryName": "example.com", "queryType": "AAAA", "responseCode": "NOERROR", "protocol": "TCP", "edgeLocation": "IAD12", "resolverIp": "192.0.2.0", "ednsClientSubnet": "198.51.100.0/24" }
parseVPC
Use this processor to parse Amazon VPC vended logs, extract fields, and convert them into JSON format. Encoded field values are decoded.
This processor accepts only @message as the input.
Important
If you use this processor, it must be the first processor in your transformer.
Example
Take the following example log event:
2 123456789010 eni-abc123de 192.0.2.0 192.0.2.24 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK
The processor configuration is this:
[ "parseVPC": {} ]
The transformed log event would be the following.
{ "version": 2, "accountId": "123456789010", "interfaceId": "eni-abc123de", "srcAddr": "192.0.2.0", "dstAddr": "192.0.2.24", "srcPort": 20641, "dstPort": 22, "protocol": 6, "packets": 20, "bytes": 4249, "start": 1418530010, "end": 1418530070, "action": "ACCEPT", "logStatus": "OK" }
String mutate processors
lowerCaseString
The lowerCaseString processor converts a string to its lowercase version.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
withKeys | A list of keys to convert to lowercase | Yes | | Maximum entries: 10 |
Example
Take the following example log event:
{ "outer_key": { "inner_key": "INNER_VALUE" } }
The transformer configuration is this, using lowerCaseString with parseJSON:
[ { "parseJSON": {} }, { "lowerCaseString": { "withKeys": ["outer_key.inner_key"] } } ]
The transformed log event would be the following.
{ "outer_key": { "inner_key": "inner_value" } }
upperCaseString
The upperCaseString processor converts a string to its uppercase version.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
withKeys | A list of keys to convert to uppercase | Yes | | Maximum entries: 10 |
Example
Take the following example log event:
{ "outer_key": { "inner_key": "inner_value" } }
The transformer configuration is this, using upperCaseString with parseJSON:
[ { "parseJSON": {} }, { "upperCaseString": { "withKeys": ["outer_key.inner_key"] } } ]
The transformed log event would be the following.
{ "outer_key": { "inner_key": "INNER_VALUE" } }
splitString
The splitString processor splits a field into an array using a delimiting character.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
entries | Array of entries. Each item in the array must contain source and delimiter fields. | Yes | | Maximum entries: 100 |
source | The key to split | Yes | | Maximum length: 128 |
delimiter | The separator characters responsible for the split | Yes | | Maximum length: 1 |
Example
Take the following example log event:
{ "outer_key": { "inner_key": "inner_value" } }
The transformer configuration is this, using splitString with parseJSON:
[ { "parseJSON": {} }, { "splitString": { "entries": [ { "source": "outer_key.inner_key", "delimiter": "_" } ] } } ]
The transformed log event would be the following.
{ "outer_key": { "inner_key": [ "inner", "value" ] } }
substituteString
The substituteString processor matches a key's value against a regular expression and replaces all matches with a replacement string.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
entries | Array of entries. Each item in the array must contain source, from, and to fields. | Yes | | Maximum entries: 10 |
source | The key to modify | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
from | The regular expression string to be replaced. Special regex characters such as [ and ] must be escaped using \\ when using double quotes and with \ when using single quotes. For more information, see Class Pattern. | Yes | | Maximum length: 128 |
to | The string to be substituted for each match of from | Yes | | Maximum length: 128 |
Example
Take the following example log event:
{ "outer_key": { "inner_key1": "[]", "inner_key2": "123-345-567" } }
The transformer configuration is this, using substituteString with parseJSON:
[ { "parseJSON": {} }, { "substituteString": { "entries": [ { "source": "outer_key.inner_key1", "from": "\\[\\]", "to": "value1" }, { "source": "outer_key.inner_key2", "from": "[0-9]{3}-[0-9]{3}-[0-9]{3}", "to": "xxx-xxx-xxx" } ] } } ]
The transformed log event would be the following.
{ "outer_key": { "inner_key1": "value1", "inner_key2": "xxx-xxx-xxx" } }
trimString
The trimString processor removes whitespace from the beginning and end of a key.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
withKeys | A list of keys to trim | Yes | | Maximum entries: 10 |
Example
Take the following example log event:
{ "outer_key": { "inner_key": " inner_value " } }
The transformer configuration is this, using trimString with parseJSON:
[ { "parseJSON": {} }, { "trimString": { "withKeys": ["outer_key.inner_key"] } } ]
The transformed log event would be the following.
{ "outer_key": { "inner_key": "inner_value" } }
JSON mutate processors
addKeys
Use the addKeys processor to add new key-value pairs to the log event.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
entries | Array of entries. Each item in the array can contain key, value, and overwriteIfExists fields. | Yes | | Maximum entries: 5 |
key | The key of the new entry to be added | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
value | The value of the new entry to be added | Yes | | Maximum length: 256 |
overwriteIfExists | If you set this to true, the existing value is overwritten if the key already exists in the event. | No | false | |
Example
Take the following example log event:
{ "outer_key": { "inner_key": "inner_value" } }
The transformer configuration is this, using addKeys with parseJSON:
[ { "parseJSON": {} }, { "addKeys": { "entries": [ { "key": "outer_key.new_key", "value": "new_value" } ] } } ]
The transformed log event would be the following.
{ "outer_key": { "inner_key": "inner_value", "new_key": "new_value" } }
deleteKeys
Use the deleteKeys processor to delete fields from a log event. These fields can include key-value pairs.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
withKeys | The list of keys to delete | Yes | | Maximum entries: 5 |
Example
Take the following example log event:
{ "outer_key": { "inner_key": "inner_value" } }
The transformer configuration is this, using deleteKeys with parseJSON:
[ { "parseJSON": {} }, { "deleteKeys": { "withKeys": ["outer_key.inner_key"] } } ]
The transformed log event would be the following.
{ "outer_key": {} }
moveKeys
Use the moveKeys processor to move a key from one field to another.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
entries | Array of entries. Each item in the array can contain source, target, and overwriteIfExists fields. | Yes | | Maximum entries: 5 |
source | The key to move | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
target | The key to move to | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
overwriteIfExists | If you set this to true, the existing value is overwritten if the key already exists in the event. | No | false | |
Example
Take the following example log event:
{ "outer_key1": { "inner_key1": "inner_value1" }, "outer_key2": { "inner_key2": "inner_value2" } }
The transformer configuration is this, using moveKeys with parseJSON:
[ { "parseJSON": {} }, { "moveKeys": { "entries": [ { "source": "outer_key1.inner_key1", "target": "outer_key2" } ] } } ]
The transformed log event would be the following.
{ "outer_key1": {}, "outer_key2": { "inner_key2": "inner_value2", "inner_key1": "inner_value1" } }
renameKeys
Use the renameKeys processor to rename keys in a log event.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
entries | Array of entries. Each item in the array can contain key, target, and overwriteIfExists fields. | Yes | | Maximum entries: 5 |
key | The key to rename | Yes | | Maximum length: 128 |
target | The new key name | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
overwriteIfExists | If you set this to true, the existing value is overwritten if the key already exists in the event. | No | false | |
Example
Take the following example log event:
{ "outer_key": { "inner_key": "inner_value" } }
The transformer configuration is this, using renameKeys with parseJSON:
[ { "parseJSON": {} }, { "renameKeys": { "entries": [ { "key": "outer_key", "target": "new_key" } ] } } ]
The transformed log event would be the following.
{ "new_key": { "inner_key": "inner_value" } }
copyValue
Use the copyValue processor to copy values within a log event. You can also use this processor to add metadata to log events by copying the values of the following metadata keys into the log events: @logGroupName, @logGroupStream, @accountId, @regionName. This is illustrated in the following example.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
entries | Array of entries. Each item in the array can contain source, target, and overwriteIfExists fields. | Yes | | Maximum entries: 5 |
source | The key to copy | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
target | The key to copy the value to | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
overwriteIfExists | If you set this to true, the existing value is overwritten if the key already exists in the event. | No | false | |
Example
Take the following example log event:
{ "outer_key": { "inner_key": "inner_value" } }
The transformer configuration is this, using copyValue with parseJSON:
[ { "parseJSON": {} }, { "copyValue": { "entries": [ { "source": "outer_key.inner_key", "target": "new_key" }, { "source": "@logGroupName", "target": "log_group_name" }, { "source": "@logGroupStream", "target": "log_group_stream" }, { "source": "@accountId", "target": "account_id" }, { "source": "@regionName", "target": "region_name" } ] } } ]
The transformed log event would be the following.
{ "outer_key": { "inner_key": "inner_value" }, "new_key": "inner_value", "log_group_name": "myLogGroupName", "log_group_stream": "myLogStreamName", "account_id": "012345678912", "region_name": "us-east-1" }
listToMap
The listToMap processor takes a list of objects that contain key fields, and converts them into a map of target keys.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
source | The key in the ProcessingEvent with a list of objects that will be converted to a map | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
key | The key of the fields to be extracted as keys in the generated map | Yes | | Maximum length: 128 |
valueKey | If this is specified, the values that you specify in this parameter will be extracted from the source objects and put into the values of the generated map. Otherwise, the original objects in the source list will be put into the values of the generated map. | No | | Maximum length: 128 |
target | The key of the field that will hold the generated map | No | Root node | Maximum length: 128. Maximum nested key depth: 3 |
flatten | A Boolean value to indicate whether the list will be flattened into single items or whether the values in the generated map will be lists. By default, the values for the matching keys are represented in an array. Set flatten to true to flatten each array into a single value chosen by flattenedElement. | No | false | |
flattenedElement | If you set flatten to true, use flattenedElement to specify which element, first or last, to keep. | Required when flatten is set to true | | Value can only be first or last |
Example
Take the following example log event:
{ "outer_key": [ { "inner_key": "a", "inner_value": "val-a" }, { "inner_key": "b", "inner_value": "val-b1" }, { "inner_key": "b", "inner_value": "val-b2" }, { "inner_key": "c", "inner_value": "val-c" } ] }
Transformer for use case 1: flatten is false
[ { "parseJSON": {} }, { "listToMap": { "source": "outer_key", "key": "inner_key", "valueKey": "inner_value", "flatten": false } } ]
The transformed log event would be the following.
{ "outer_key": [ { "inner_key": "a", "inner_value": "val-a" }, { "inner_key": "b", "inner_value": "val-b1" }, { "inner_key": "b", "inner_value": "val-b2" }, { "inner_key": "c", "inner_value": "val-c" } ], "a": [ "val-a" ], "b": [ "val-b1", "val-b2" ], "c": [ "val-c" ] }
Transformer for use case 2: flatten is true and flattenedElement is first
[ { "parseJSON": {} }, { "listToMap": { "source": "outer_key", "key": "inner_key", "valueKey": "inner_value", "flatten": true, "flattenedElement": "first" } } ]
The transformed log event would be the following.
{ "outer_key": [ { "inner_key": "a", "inner_value": "val-a" }, { "inner_key": "b", "inner_value": "val-b1" }, { "inner_key": "b", "inner_value": "val-b2" }, { "inner_key": "c", "inner_value": "val-c" } ], "a": "val-a", "b": "val-b1", "c": "val-c" }
Transformer for use case 3: flatten is true and flattenedElement is last
[ { "parseJSON": {} }, { "listToMap": { "source": "outer_key", "key": "inner_key", "valueKey": "inner_value", "flatten": true, "flattenedElement": "last" } } ]
The transformed log event would be the following.
{ "outer_key": [ { "inner_key": "a", "inner_value": "val-a" }, { "inner_key": "b", "inner_value": "val-b1" }, { "inner_key": "b", "inner_value": "val-b2" }, { "inner_key": "c", "inner_value": "val-c" } ], "a": "val-a", "b": "val-b2", "c": "val-c" }
Datatype converter processors
typeConverter
Use the typeConverter processor to convert a value type associated with the specified key to the specified type. It's a casting processor that changes the types of the specified fields. Values can be converted into one of the following datatypes: integer, double, string, and boolean.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
entries | Array of entries. Each item in the array must contain key and type fields. | Yes | | Maximum entries: 10 |
key | The key with the value that is to be converted to a different type | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
type | The type to convert to. Valid values are integer, double, string, and boolean. | Yes | | |
Example
Take the following example log event:
{ "name": "value", "status": "200" }
The transformer configuration is this, using typeConverter with parseJSON:
[ { "parseJSON": {} }, { "typeConverter": { "entries": [ { "key": "status", "type": "integer" } ] } } ]
The transformed log event would be the following.
{ "name": "value", "status": 200 }
datetimeConverter
Use the datetimeConverter processor to convert a datetime string into a format that you specify.
Field | Description | Required? | Default | Limits |
---|---|---|---|---|
source | The key to apply the date conversion to | Yes | | Maximum entries: 10 |
matchPatterns | A list of patterns to match against the source field | Yes | | Maximum entries: 5 |
target | The JSON field to store the result in | Yes | | Maximum length: 128. Maximum nested key depth: 3 |
targetFormat | The datetime format to use for the converted data in the target field | No | | Maximum length: 64 |
sourceTimezone | The time zone of the source field | No | UTC | Minimum length: 1 |
targetTimezone | The time zone of the target field | No | UTC | Minimum length: 1 |
locale | The locale of the source field | No | Locale.ROOT | Minimum length: 1 |
Example
Take the following example log event:
{"german_datetime": "Samstag 05. Dezember 1998 11:00:00"}
The transformer configuration is this, using dateTimeConverter with parseJSON:
[ { "parseJSON": {} }, { "dateTimeConverter": { "source": "german_datetime", "target": "target_1", "locale": "de", "matchPatterns": ["EEEE dd. MMMM yyyy HH:mm:ss"], "sourceTimezone": "Europe/Berlin", "targetTimezone": "America/New_York", "targetFormat": "yyyy-MM-dd'T'HH:mm:ss z" } } ]
The transformed log event would be the following.
{ "german_datetime": "Samstag 05. Dezember 1998 11:00:00", "target_1": "1998-12-05T17:00:00 MEZ" }