Creates an account-level data protection policy, subscription filter policy, or field index policy that applies to all log groups or a subset of log groups in the account.
Data protection policy

A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy.

Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked. If you use PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.
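As an illustration, the following boto3 sketch creates an account-level data protection policy. The policy name and the EmailAddress data identifier are illustrative choices, not values from this page; the document shape follows the masking policy format described in the CloudWatch Logs documentation.

```python
import json
import boto3

logs = boto3.client("logs")

# Minimal sketch of an account-level data protection policy document.
# A masking policy contains an Audit statement and a Deidentify statement.
policy_document = {
    "Name": "account-data-protection",  # hypothetical policy name
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

logs.put_account_policy(
    policyName="account-data-protection",
    policyType="DATA_PROTECTION",
    policyDocument=json.dumps(policy_document),
    scope="ALL",  # apply to all log groups in the account
)
```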
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.
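For example, the following boto3 sketch calls FilterLogEvents with the unmask parameter set to true. The log group name is a placeholder, and the caller needs the logs:Unmask permission.

```python
import boto3

logs = boto3.client("logs")

# Retrieve log events with masked data revealed. Requires logs:Unmask.
response = logs.filter_log_events(
    logGroupName="/my-app/application",  # hypothetical log group
    unmask=True,  # return the original, unmasked log event messages
)
for event in response["events"]:
    print(event["message"])
```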
For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
To use the PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.
The PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
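As a sketch, a log-group-level policy created with PutDataProtectionPolicy might look like the following. The log group name and data identifier are illustrative, and both an Audit and a Deidentify statement are included because the masking policy format expects both.

```python
import json
import boto3

logs = boto3.client("logs")

# Sketch of a log-group-level data protection policy. If the account also
# has an account-level policy, terms from both policies are masked.
policy_document = {
    "Name": "payments-data-protection",  # hypothetical policy name
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

logs.put_data_protection_policy(
    logGroupIdentifier="/my-app/payments",  # hypothetical log group
    policyDocument=json.dumps(policy_document),
)
```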
Subscription filter policy

A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format (see the decoding sketch after the following list).
The following destinations are supported for subscription filters:
- A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.
- A Firehose delivery stream in the same account as the subscription policy, for same-account delivery.
- A Lambda function in the same account as the subscription policy, for same-account delivery.
- A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
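Because delivered events are Base64 encoded and GZIP compressed, a receiving service has to reverse both steps before it can read the log events. The following Python sketch shows one way to do that; the function name is illustrative.

```python
import base64
import gzip
import json

def decode_subscription_payload(record_data: str) -> dict:
    """Decode a CloudWatch Logs subscription payload: Base64 decode,
    GZIP decompress, then parse the JSON document containing logEvents."""
    compressed = base64.b64decode(record_data)
    return json.loads(gzip.decompress(compressed))

# In a Lambda destination the payload arrives as event["awslogs"]["data"];
# for Kinesis or Firehose destinations it is the record's data field.
```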
Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in PolicyName. To perform a PutAccountPolicy subscription filter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.
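Putting this together, an account-level subscription filter policy pointed at a Kinesis data stream might be created as in the following boto3 sketch. The ARNs, role, and filter pattern are placeholders; the role in RoleArn is what requires the iam:PassRole permission.

```python
import json
import boto3

logs = boto3.client("logs")

# Sketch of an account-level subscription filter policy that streams
# matching log events to a Kinesis data stream.
policy_document = {
    "DestinationArn": "arn:aws:kinesis:us-east-1:123456789012:stream/my-stream",
    "RoleArn": "arn:aws:iam::123456789012:role/cwl-to-kinesis",
    "FilterPattern": "ERROR",  # forward only events containing ERROR
    "Distribution": "Random",
}

logs.put_account_policy(
    policyName="account-subscription-filter",
    policyType="SUBSCRIPTION_FILTER",
    policyDocument=json.dumps(policy_document),
    scope="ALL",
)
```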
Transformer policy

Creates or updates a log transformer policy for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use.
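As an illustration, the following boto3 sketch creates an account-level transformer policy using the parseJSON and addKeys processors. The policy name and the added key/value pair are illustrative choices.

```python
import json
import boto3

logs = boto3.client("logs")

# Sketch of an account-level transformer policy. The policy document wraps
# a transformerConfig array of processors applied in order at ingestion.
policy_document = {
    "transformerConfig": [
        {"parseJSON": {}},  # parse each log event's message as JSON
        {
            "addKeys": {
                "entries": [
                    # illustrative key/value added to every log event
                    {"key": "env", "value": "prod", "overwriteIfExists": False}
                ]
            }
        },
    ]
}

logs.put_account_policy(
    policyName="account-transformer",
    policyType="TRANSFORMER",
    policyDocument=json.dumps(policy_document),
    scope="ALL",
)
```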
Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
You can create transformers only for the log groups in the Standard log class.
You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you have multiple account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another transformer policy filtered to my-logpprod or my-logging.
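For example, the following boto3 sketch scopes two transformer policies to non-overlapping prefixes with the selectionCriteria parameter. The LogGroupNamePrefix syntax shown and the policy names are assumptions made for illustration.

```python
import json
import boto3

logs = boto3.client("logs")

transformer = {"transformerConfig": [{"parseJSON": {}}]}

# First scoped policy, limited to log groups whose names start with "app-a".
logs.put_account_policy(
    policyName="transform-app-a",
    policyType="TRANSFORMER",
    policyDocument=json.dumps(transformer),
    selectionCriteria='LogGroupNamePrefix IN ["app-a"]',
)

# A second scoped policy must use a prefix that does not overlap "app-a".
logs.put_account_policy(
    policyName="transform-svc-b",
    policyType="TRANSFORMER",
    policyDocument=json.dumps(transformer),
    selectionCriteria='LogGroupNamePrefix IN ["svc-b"]',
)
```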
You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.
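A log-group-level transformer might be created as in the following sketch; the log group name and processor choices are illustrative.

```python
import boto3

logs = boto3.client("logs")

# Sketch of a log-group-level transformer created with PutTransformer.
# If this exists, the log group ignores any account-level transformer.
logs.put_transformer(
    logGroupIdentifier="/my-app/orders",  # hypothetical log group
    transformerConfig=[
        {"parseJSON": {}},
        {"lowerCaseString": {"withKeys": ["level"]}},  # illustrative processor
    ],
)
```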
Field index policy

You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, or instance IDs. For more information, see Create field indexes to improve query performance and reduce costs.

To find the fields that are in your log group events, use the GetLogGroupFields operation.
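For instance, an account-level field index policy that indexes requestId could be created as in the following boto3 sketch; the policy name is illustrative.

```python
import json
import boto3

logs = boto3.client("logs")

# Sketch of an account-level field index policy. The policy document is a
# JSON object with a "Fields" array naming the fields to index.
logs.put_account_policy(
    policyName="account-field-index",
    policyType="FIELD_INDEX_POLICY",
    policyDocument=json.dumps({"Fields": ["requestId"]}),
    scope="ALL",
)
```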
For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId in [value, value, ...] will attempt to process only the log events where the indexed field matches the specified value.
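Such a query might be issued programmatically as in the following sketch. The log group, time range, and requestId value are placeholders, and in practice you would poll GetQueryResults until the query completes.

```python
import time
import boto3

logs = boto3.client("logs")

# Sketch of a CloudWatch Logs Insights query that can take advantage of a
# requestId field index by filtering on an exact value.
query = logs.start_query(
    logGroupName="/my-app/application",          # hypothetical log group
    startTime=int(time.time()) - 3600,           # last hour
    endTime=int(time.time()),
    queryString='fields @timestamp, @message | filter requestId = "abc-123"',
)
results = logs.get_query_results(queryId=query["queryId"])
```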
Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of RequestId won't match a log event containing requestId.
You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you have multiple account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another field index policy filtered to my-logpprod or my-logging.
If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts.
If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of PutAccountPolicy. If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy that you create with PutAccountPolicy.
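A log-group-level index policy might be created as in the following sketch; the log group name and field choices are illustrative.

```python
import json
import boto3

logs = boto3.client("logs")

# Sketch of a log-group-level field index policy created with
# PutIndexPolicy; this log group would then ignore any account-level
# field index policy.
logs.put_index_policy(
    logGroupIdentifier="/my-app/orders",  # hypothetical log group
    policyDocument=json.dumps({"Fields": ["requestId", "sessionId"]}),
)
```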