

This is a machine-translated version. If there is any discrepancy between this translation and the original English text, the English version prevails.

# Data key caching
<a name="data-key-caching"></a>

*Data key caching* stores [data keys](concepts.md#DEK) and [related cryptographic material](data-caching-details.md#cache-entries) in a cache. When you encrypt or decrypt data, the AWS Encryption SDK looks for a matching data key in the cache. If it finds a match, it uses the cached data key rather than generating a new one. Data key caching can improve performance, reduce cost, and help you stay within service limits as your application scales.

Your application can benefit from data key caching if:
+ It can reuse data keys.
+ It generates numerous data keys.
+ Your cryptographic operations are unacceptably slow, expensive, limited, or resource-intensive.

Caching can reduce your use of cryptographic services, such as AWS Key Management Service (AWS KMS). If you are hitting your [AWS KMS requests-per-second limit](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html#requests-per-second), caching can help. Your application can use cached keys to service some of your data key requests instead of calling AWS KMS. (You can also create a case in the [AWS Support Center](https://console.aws.amazon.com/support/home#/) to raise the limit for your account.)
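To see the scale of that effect, here is a minimal back-of-the-envelope sketch (all numbers are hypothetical, and `kms_calls_needed` is an illustrative helper, not an SDK function) of how reusing each data key for up to N messages divides the number of AWS KMS `GenerateDataKey` calls:

```python
import math

def kms_calls_needed(messages: int, max_messages_per_key: int) -> int:
    """Each cached data key serves up to max_messages_per_key messages,
    so one GenerateDataKey call covers that many encrypt operations."""
    return math.ceil(messages / max_messages_per_key)

# Without caching: one KMS call per message.
assert kms_calls_needed(5000, 1) == 5000
# With a max-messages threshold of 10: a tenth of the calls.
assert kms_calls_needed(5000, 10) == 500
```

The same arithmetic explains why caching helps with requests-per-second throttling: the KMS request rate drops by roughly the same factor as the reuse limit.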

The AWS Encryption SDK helps you to create and manage data key caches. It provides a [local cache](data-caching-details.md#simplecache) and a [caching cryptographic materials manager](data-caching-details.md#caching-cmm) (caching CMM) that interacts with the cache and enforces the [security thresholds](thresholds.md) that you set. Working together, these components help you benefit from the efficiency of reusing data keys while maintaining the security of your system.

Data key caching is an optional feature of the AWS Encryption SDK that you should use cautiously. By default, the AWS Encryption SDK generates a new data key for every encrypt operation. This technique supports cryptographic best practices, which discourage excessive reuse of data keys. In general, use data key caching only when it is required to meet your performance goals. Then, use the data key caching [security thresholds](thresholds.md) to ensure that you use the minimum amount of caching required to meet your cost and performance goals.
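To make the role of those thresholds concrete, here is a toy sketch (illustration only, not the SDK's actual cache implementation) of how a cached data key entry is retired once it exceeds either a max-age or a max-messages threshold:

```python
import time

class CacheEntry:
    """Toy stand-in for a cached data key entry (illustration only)."""

    def __init__(self, data_key: bytes, max_age_seconds: float, max_messages: int):
        self.data_key = data_key
        self.created = time.monotonic()
        self.max_age_seconds = max_age_seconds
        self.max_messages = max_messages
        self.messages_encrypted = 0

    def is_usable(self) -> bool:
        # An entry is retired when it exceeds either threshold.
        too_old = time.monotonic() - self.created > self.max_age_seconds
        overused = self.messages_encrypted >= self.max_messages
        return not (too_old or overused)

    def record_use(self) -> None:
        self.messages_encrypted += 1

entry = CacheEntry(b"example-key", max_age_seconds=60.0, max_messages=10)
for _ in range(10):
    assert entry.is_usable()
    entry.record_use()
# After 10 messages the entry is retired, forcing a fresh data key.
assert not entry.is_usable()
```

Tighter thresholds mean fresher data keys and less caching benefit; looser thresholds mean more reuse and more risk, which is why the guidance above is to pick the smallest values that still meet your goals.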

Version 3.x of the AWS Encryption SDK for Java supports the caching CMM only with the legacy master key provider interface, not with the keyring interface. However, version 4.x and later of the AWS Encryption SDK for .NET, version 3.x of the AWS Encryption SDK for Java, version 4.x of the AWS Encryption SDK for Python, version 1.x of the AWS Encryption SDK for Rust, and version 0.1.x and later of the AWS Encryption SDK for Go support the [AWS KMS hierarchical keyring](use-hierarchical-keyring.md), an alternative cryptographic materials caching solution. Content encrypted with the AWS KMS hierarchical keyring can only be decrypted with the AWS KMS hierarchical keyring.

For a detailed discussion of these security tradeoffs, see [AWS Encryption SDK: How to Decide if Data Key Caching is Right for Your Application](https://aws.amazon.com/blogs/security/aws-encryption-sdk-how-to-decide-if-data-key-caching-is-right-for-your-application/) on the AWS Security Blog.

**Topics**
+ [How to use data key caching](implement-caching.md)
+ [Setting cache security thresholds](thresholds.md)
+ [Data key caching details](data-caching-details.md)
+ [Data key caching example](sample-cache-example.md)

# How to use data key caching
<a name="implement-caching"></a>

This topic shows you how to use data key caching in your application. It takes you through the process step by step. Then, it combines the steps in a simple example that uses data key caching in an operation to encrypt a string.

The examples in this section show how to use [version 2.0.*x*](about-versions.md) and later of the AWS Encryption SDK. For examples that use earlier versions, find your release in the [Releases](https://github.com/aws/aws-encryption-sdk-c/releases) list of the GitHub repository for your [programming language](programming-languages.md).

For complete and tested examples of using data key caching in the AWS Encryption SDK, see:
+ C/C++: [caching_cmm.cpp](https://github.com/aws/aws-encryption-sdk-c/blob/master/examples/caching_cmm.cpp)
+ Java: [SimpleDataKeyCachingExample.java](https://github.com/aws/aws-encryption-sdk-java/blob/master/src/examples/java/com/amazonaws/crypto/examples/v2/SimpleDataKeyCachingExample.java)
+ JavaScript browser: [caching_cmm.ts](https://github.com/aws/aws-encryption-sdk-javascript/blob/master/modules/example-browser/src/caching_cmm.ts)
+ JavaScript Node.js: [caching_cmm.ts](https://github.com/aws/aws-encryption-sdk-javascript/blob/master/modules/example-node/src/caching_cmm.ts)
+ Python: [data_key_caching_basic.py](https://github.com/aws/aws-encryption-sdk-python/blob/master/examples/src/legacy/data_key_caching_basic.py)

The [AWS Encryption SDK for .NET](dot-net.md) does not support data key caching.

**Topics**
+ [Using data key caching: Step-by-step](#implement-caching-steps)
+ [Data key caching example: Encrypt a string](#caching-example-encrypt-string)

## Using data key caching: Step-by-step
<a name="implement-caching-steps"></a>

These step-by-step instructions show you how to create the components that you need to implement data key caching.
+ [Create a data key cache](data-caching-details.md#simplecache). In these examples, we use the local cache that the AWS Encryption SDK provides. We limit the cache to 10 data keys.

   

------
#### [ C ]

  ```
  // Cache capacity (maximum number of entries) is required
  size_t cache_capacity = 10; 
  struct aws_allocator *allocator = aws_default_allocator();
  
  struct aws_cryptosdk_materials_cache *cache = aws_cryptosdk_materials_cache_local_new(allocator, cache_capacity);
  ```

------
#### [ Java ]

  The following example uses version 2.x of the AWS Encryption SDK for Java. Version 3.x of the AWS Encryption SDK for Java deprecates the data key caching CMM. With version 3.x, you can also use the [AWS KMS hierarchical keyring](use-hierarchical-keyring.md), an alternative cryptographic materials caching solution.

  ```
  // Cache capacity (maximum number of entries) is required
  int MAX_CACHE_SIZE = 10; 
  
  CryptoMaterialsCache cache = new LocalCryptoMaterialsCache(MAX_CACHE_SIZE);
  ```

------
#### [ JavaScript Browser ]

  ```
  const capacity = 10
  
  const cache = getLocalCryptographicMaterialsCache(capacity)
  ```

------
#### [ JavaScript Node.js ]

  ```
  const capacity = 10
  
  const cache = getLocalCryptographicMaterialsCache(capacity)
  ```

------
#### [ Python ]

  ```
  # Cache capacity (maximum number of entries) is required
  MAX_CACHE_SIZE = 10
  
  cache = aws_encryption_sdk.LocalCryptoMaterialsCache(MAX_CACHE_SIZE)
  ```

------

   
+ Create a [master key provider](concepts.md#master-key-provider) (Java and Python) or a [keyring](concepts.md#keyring) (C and JavaScript). These examples use an AWS Key Management Service (AWS KMS) master key provider or a compatible [AWS KMS keyring](use-kms-keyring.md).

   

------
#### [ C ]

  ```
  // Create an AWS KMS keyring
  //   The input is the Amazon Resource Name (ARN) 
  //   of an AWS KMS key
  struct aws_cryptosdk_keyring *kms_keyring = Aws::Cryptosdk::KmsKeyring::Builder().Build(kms_key_arn);
  ```

------
#### [ Java ]

  The following example uses version 2.x of the AWS Encryption SDK for Java. Version 3.x of the AWS Encryption SDK for Java deprecates the data key caching CMM. With version 3.x, you can also use the [AWS KMS hierarchical keyring](use-hierarchical-keyring.md), an alternative cryptographic materials caching solution.

  ```
  // Create an AWS KMS master key provider
  //   The input is the Amazon Resource Name (ARN) 
  //   of an AWS KMS key
  MasterKeyProvider<KmsMasterKey> keyProvider = KmsMasterKeyProvider.builder().buildStrict(kmsKeyArn);
  ```

------
#### [ JavaScript Browser ]

  In the browser, you must inject your credentials securely. This example defines the credentials in webpack (kms.webpack.config), which resolves the credentials at runtime. It creates an AWS KMS client provider instance from an AWS KMS client and the credentials. Then, when it creates the keyring, it passes the client provider to the constructor along with the AWS KMS key (`generatorKeyId`).

  ```
  const { accessKeyId, secretAccessKey, sessionToken } = credentials
  
  const clientProvider = getClient(KMS, {
      credentials: {
        accessKeyId,
        secretAccessKey,
        sessionToken
      }
    })
  
  /* Create an AWS KMS keyring
   *   You must configure the AWS KMS keyring with at least one AWS KMS key
   *   The input is the Amazon Resource Name (ARN)
   *   of an AWS KMS key
   */
  const keyring = new KmsKeyringBrowser({
      clientProvider,
      generatorKeyId,
      keyIds,
    })
  ```

------
#### [ JavaScript Node.js ]

  ```
  /* Create an AWS KMS keyring
   *   The input is the Amazon Resource Name (ARN)
   *   of an AWS KMS key
   */
  const keyring = new KmsKeyringNode({ generatorKeyId })
  ```

------
#### [ Python ]

  ```
  # Create an AWS KMS master key provider
  #  The input is the Amazon Resource Name (ARN) 
  #  of an AWS KMS key
  key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[kms_key_arn])
  ```

------

   
+ [Create a caching cryptographic materials manager](data-caching-details.md#caching-cmm) (caching CMM).

   

  Associate your caching CMM with your cache and your master key provider or keyring. Then, [set cache security thresholds](thresholds.md) on the caching CMM.

   

------
#### [ C ]

  In the AWS Encryption SDK for C, you can create a caching CMM from an underlying CMM, such as the default CMM, or from a keyring. This example creates the caching CMM from a keyring.

  After you create the caching CMM, you can release your references to the keyring and the cache. For more information, see [Reference counting](c-language-using.md#c-language-using-release).

  ```
  // Create the caching CMM
  //   Set the partition ID to NULL.
  //   Set the required maximum age value to 60 seconds.
  struct aws_cryptosdk_cmm *caching_cmm = aws_cryptosdk_caching_cmm_new_from_keyring(allocator, cache, kms_keyring, NULL, 60, AWS_TIMESTAMP_SECS);
  
  // Add an optional message threshold
  //   The cached data key will not be used for more than 10 messages.
  aws_status = aws_cryptosdk_caching_cmm_set_limit_messages(caching_cmm, 10);
  
  // Release your references to the cache and the keyring.
  aws_cryptosdk_materials_cache_release(cache);
  aws_cryptosdk_keyring_release(kms_keyring);
  ```

------
#### [ Java ]

  The following example uses version 2.x of the AWS Encryption SDK for Java. Version 3.x of the AWS Encryption SDK for Java does not support data key caching, but it does support the [AWS KMS hierarchical keyring](use-hierarchical-keyring.md), an alternative cryptographic materials caching solution.

  ```
  /*
   * Security thresholds
   *   Max entry age is required. 
   *   Max messages (and max bytes) per entry are optional
   */
  int MAX_ENTRY_AGE_SECONDS = 60;
  int MAX_ENTRY_MSGS = 10;
         
  //Create a caching CMM
  CryptoMaterialsManager cachingCmm =
      CachingCryptoMaterialsManager.newBuilder().withMasterKeyProvider(keyProvider)
                                   .withCache(cache)
                                   .withMaxAge(MAX_ENTRY_AGE_SECONDS, TimeUnit.SECONDS)
                                   .withMessageUseLimit(MAX_ENTRY_MSGS)
                                   .build();
  ```

------
#### [ JavaScript Browser ]

  ```
  /*
   * Security thresholds
   *   Max age (in milliseconds) is required.
   *   Max messages (and max bytes) per entry are optional.
   */
  const maxAge = 1000 * 60
  const maxMessagesEncrypted = 10
  
  /* Create a caching CMM from a keyring  */
  const cachingCmm = new WebCryptoCachingMaterialsManager({
    backingMaterials: keyring,
    cache,
    maxAge,
    maxMessagesEncrypted
  })
  ```

------
#### [ JavaScript Node.js ]

  ```
  /*
   * Security thresholds
   *   Max age (in milliseconds) is required.
   *   Max messages (and max bytes) per entry are optional.
   */
  const maxAge = 1000 * 60
  const maxMessagesEncrypted = 10
  
  /* Create a caching CMM from a keyring  */
  const cachingCmm = new NodeCachingMaterialsManager({
    backingMaterials: keyring,
    cache,
    maxAge,
    maxMessagesEncrypted
  })
  ```

------
#### [ Python ]

  ```
  # Security thresholds
  #   Max entry age is required. 
  #   Max messages (and max bytes) per entry are optional
  #
  MAX_ENTRY_AGE_SECONDS = 60.0
  MAX_ENTRY_MESSAGES = 10
         
  # Create a caching CMM
  caching_cmm = CachingCryptoMaterialsManager(
      master_key_provider=key_provider,
      cache=cache,
      max_age=MAX_ENTRY_AGE_SECONDS,
      max_messages_encrypted=MAX_ENTRY_MESSAGES
  )
  ```

------

That's all you need to do. Then, let the AWS Encryption SDK manage the cache for you, or add your own cache management logic.

To use data key caching in a call to encrypt or decrypt data, specify your caching CMM instead of a master key provider or other CMM.

**Note**  
If you're encrypting data streams, or data of any unknown size, be sure to specify the data size in the request. The AWS Encryption SDK does not use data key caching when encrypting data of unknown size.
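The reason for this restriction is that the byte threshold can only be enforced when the plaintext length is known up front. The following hypothetical sketch (illustration only, not SDK code; `should_use_cached_key` is an invented helper) shows that decision:

```python
def should_use_cached_key(plaintext_length, bytes_already_encrypted, max_bytes_encrypted):
    """Toy decision logic: reuse a cached data key only when the
    plaintext length is known and stays within the byte threshold."""
    if plaintext_length is None:
        # Unknown size: the byte threshold can't be enforced, so skip the cache.
        return False
    return bytes_already_encrypted + plaintext_length <= max_bytes_encrypted

# A message of known size within the threshold can use the cache.
assert should_use_cached_key(100, 800, 1000)
# An unknown-size stream cannot.
assert not should_use_cached_key(None, 0, 1000)
# A message that would push the entry past the threshold cannot.
assert not should_use_cached_key(300, 800, 1000)
```

This is why the C and Node.js examples below set a message bound or a `plaintextLength`: supplying a size (or an upper bound) is what makes a cached data key eligible for reuse.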

------
#### [ C ]

In the AWS Encryption SDK for C, you create a session with the caching CMM and then process the session.

By default, the AWS Encryption SDK does not cache data keys when the message size is unknown and unbounded. To allow caching when you don't know the exact data size, use the `aws_cryptosdk_session_set_message_bound` method to set a maximum size for the message. Set the bound larger than the estimated message size. If the actual message size exceeds the bound, the encrypt operation fails.

```
/* Create a session with the caching CMM. Set the session mode to encrypt. */
struct aws_cryptosdk_session *session = aws_cryptosdk_session_new_from_cmm_2(allocator, AWS_CRYPTOSDK_ENCRYPT, caching_cmm);

/* Set a message bound of 1000 bytes */
aws_status = aws_cryptosdk_session_set_message_bound(session, 1000);

/* Encrypt the message using the session with the caching CMM */
aws_status = aws_cryptosdk_session_process(
             session, output_buffer, output_capacity, &output_produced, input_buffer, input_len, &input_consumed);

/* Release your references to the caching CMM and the session. */
aws_cryptosdk_cmm_release(caching_cmm);
aws_cryptosdk_session_destroy(session);
```

------
#### [ Java ]

The following example uses version 2.x of the AWS Encryption SDK for Java. Version 3.x of the AWS Encryption SDK for Java deprecates the data key caching CMM. With version 3.x, you can also use the [AWS KMS hierarchical keyring](use-hierarchical-keyring.md), an alternative cryptographic materials caching solution.

```
// When the call to encryptData specifies a caching CMM,
// the encryption operation uses the data key cache
final AwsCrypto encryptionSdk = AwsCrypto.standard();
return encryptionSdk.encryptData(cachingCmm, plaintext_source).getResult();
```

------
#### [ JavaScript Browser ]

```
const { result } = await encrypt(cachingCmm, plaintext)
```

------
#### [ JavaScript Node.js ]

When you use the caching CMM in the AWS Encryption SDK for JavaScript for Node.js, the `encrypt` method requires the length of the plaintext. If you don't provide it, the data key is not cached. If you provide a length, but the plaintext data you supply exceeds that length, the encrypt operation fails. If you don't know the exact length of the plaintext, such as when you're streaming data, provide the largest expected value.

```
const { result } = await encrypt(cachingCmm, plaintext, { plaintextLength: plaintext.length })
```

------
#### [ Python ]

```
# Set up an encryption client
client = aws_encryption_sdk.EncryptionSDKClient()

# When the call to encrypt specifies a caching CMM,
# the encryption operation uses the data key cache
#
encrypted_message, header = client.encrypt(
    source=plaintext_source,
    materials_manager=caching_cmm
)
```

------

## Data key caching example: Encrypt a string
<a name="caching-example-encrypt-string"></a>

This simple code example uses data key caching when encrypting a string. It combines the code from the [step-by-step procedure](#implement-caching-steps) into test code that you can run.

The example creates a [local cache](data-caching-details.md#simplecache) and a [master key provider](concepts.md#master-key-provider) or [keyring](concepts.md#keyring) for an AWS KMS key. Then, it uses the local cache and the master key provider or keyring to create a caching CMM with appropriate [security thresholds](thresholds.md). In Java and Python, the encrypt request specifies the caching CMM, the plaintext data to encrypt, and an [encryption context](data-caching-details.md#caching-encryption-context). In C, the caching CMM is specified in the session, and the session is supplied to the encrypt request.

To run these examples, you need to supply the [Amazon Resource Name (ARN) of an AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys.html). Be sure that you have [permission to use the AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-users) to generate a data key.

For more detailed, real-world examples of creating and using a data key cache, see [Data key caching example code](sample-cache-example-code.md).

------
#### [ C ]

```
/*
 * Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"). You may not use
 * this file except in compliance with the License. A copy of the License is
 * located at
 *
 *     http://aws.amazon.com/apache2.0/
 *
 * or in the "license" file accompanying this file. This file is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied. See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include <aws/cryptosdk/cache.h>
#include <aws/cryptosdk/cpp/kms_keyring.h>
#include <aws/cryptosdk/session.h>

void encrypt_with_caching(
    uint8_t *ciphertext,     // output will go here (assumes ciphertext_capacity bytes already allocated)
    size_t *ciphertext_len,  // length of output will go here
    size_t ciphertext_capacity,
    const char *kms_key_arn,
    int max_entry_age,
    int cache_capacity) {
    const uint64_t MAX_ENTRY_MSGS = 100;

    struct aws_allocator *allocator = aws_default_allocator();
    
    // Load error strings for debugging
    aws_cryptosdk_load_error_strings();

    // Create a keyring
    struct aws_cryptosdk_keyring *kms_keyring = Aws::Cryptosdk::KmsKeyring::Builder().Build(kms_key_arn);

    // Create a cache
    struct aws_cryptosdk_materials_cache *cache = aws_cryptosdk_materials_cache_local_new(allocator, cache_capacity);

    // Create a caching CMM
    struct aws_cryptosdk_cmm *caching_cmm = aws_cryptosdk_caching_cmm_new_from_keyring(
        allocator, cache, kms_keyring, NULL, max_entry_age, AWS_TIMESTAMP_SECS);
    if (!caching_cmm) abort();

    if (aws_cryptosdk_caching_cmm_set_limit_messages(caching_cmm, MAX_ENTRY_MSGS)) abort();

    // Create a session
    struct aws_cryptosdk_session *session =        
        aws_cryptosdk_session_new_from_cmm_2(allocator, AWS_CRYPTOSDK_ENCRYPT, caching_cmm);
    if (!session) abort();

    // Encryption context
    struct aws_hash_table *enc_ctx = aws_cryptosdk_session_get_enc_ctx_ptr_mut(session);
    if (!enc_ctx) abort();
    AWS_STATIC_STRING_FROM_LITERAL(enc_ctx_key, "purpose");
    AWS_STATIC_STRING_FROM_LITERAL(enc_ctx_value, "test");
    if (aws_hash_table_put(enc_ctx, enc_ctx_key, (void *)enc_ctx_value, NULL)) abort();

    // Plaintext data to be encrypted
    const char *my_data = "My plaintext data";
    size_t my_data_len  = strlen(my_data);
    if (aws_cryptosdk_session_set_message_size(session, my_data_len)) abort();

    // When the session uses a caching CMM, the encryption operation uses the data key cache
    // specified in the caching CMM.
    size_t bytes_read;
    if (aws_cryptosdk_session_process(
            session,
            ciphertext,
            ciphertext_capacity,
            ciphertext_len,
            (const uint8_t *)my_data,
            my_data_len,
            &bytes_read))
        abort();
    if (!aws_cryptosdk_session_is_done(session) || bytes_read != my_data_len) abort();

    aws_cryptosdk_session_destroy(session);
    aws_cryptosdk_cmm_release(caching_cmm);
    aws_cryptosdk_materials_cache_release(cache);
    aws_cryptosdk_keyring_release(kms_keyring);
}
```

------
#### [ Java ]

The following example uses version 2.x of the AWS Encryption SDK for Java. Version 3.x of the AWS Encryption SDK for Java deprecates the data key caching CMM. With version 3.x, you can also use the [AWS KMS hierarchical keyring](use-hierarchical-keyring.md), an alternative cryptographic materials caching solution.

```
// Copyright Amazon.com Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

package com.amazonaws.crypto.examples;

import com.amazonaws.encryptionsdk.AwsCrypto;
import com.amazonaws.encryptionsdk.CryptoMaterialsManager;
import com.amazonaws.encryptionsdk.MasterKeyProvider;
import com.amazonaws.encryptionsdk.caching.CachingCryptoMaterialsManager;
import com.amazonaws.encryptionsdk.caching.CryptoMaterialsCache;
import com.amazonaws.encryptionsdk.caching.LocalCryptoMaterialsCache;
import com.amazonaws.encryptionsdk.kmssdkv2.KmsMasterKey;
import com.amazonaws.encryptionsdk.kmssdkv2.KmsMasterKeyProvider;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.TimeUnit;

/**
 * <p>
 * Encrypts a string using an AWS KMS key and data key caching
 *
 * <p>
 * Arguments:
 * <ol>
 * <li>KMS Key ARN: To find the Amazon Resource Name of your AWS KMS key,
 *     see 'Find the key ID and ARN' at https://docs.aws.amazon.com/kms/latest/developerguide/find-cmk-id-arn.html
 * <li>Max entry age: Maximum time (in seconds) that a cached entry can be used
 * <li>Cache capacity: Maximum number of entries in the cache
 * </ol>
 */
public class SimpleDataKeyCachingExample {

    /*
     * Security thresholds
     *   Max entry age is required.
     *   Max messages (and max bytes) per data key are optional
     */
    private static final int MAX_ENTRY_MSGS = 100;

    public static byte[] encryptWithCaching(String kmsKeyArn, int maxEntryAge, int cacheCapacity) {
        // Plaintext data to be encrypted
        byte[] myData = "My plaintext data".getBytes(StandardCharsets.UTF_8);

        // Encryption context
        // Most encrypted data should have an associated encryption context
        // to protect integrity. This sample uses placeholder values.
        // For more information see:
        // blogs.aws.amazon.com/security/post/Tx2LZ6WBJJANTNW/How-to-Protect-the-Integrity-of-Your-Encrypted-Data-by-Using-AWS-Key-Management
        final Map<String, String> encryptionContext = Collections.singletonMap("purpose", "test");

        // Create a master key provider
        MasterKeyProvider<KmsMasterKey> keyProvider = KmsMasterKeyProvider.builder()
            .buildStrict(kmsKeyArn);

        // Create a cache
        CryptoMaterialsCache cache = new LocalCryptoMaterialsCache(cacheCapacity);

        // Create a caching CMM
        CryptoMaterialsManager cachingCmm =
            CachingCryptoMaterialsManager.newBuilder().withMasterKeyProvider(keyProvider)
                .withCache(cache)
                .withMaxAge(maxEntryAge, TimeUnit.SECONDS)
                .withMessageUseLimit(MAX_ENTRY_MSGS)
                .build();

        // When the call to encryptData specifies a caching CMM,
        // the encryption operation uses the data key cache
        final AwsCrypto encryptionSdk = AwsCrypto.standard();
        return encryptionSdk.encryptData(cachingCmm, myData, encryptionContext).getResult();
    }
}
```

------
#### [ JavaScript Browser ]

```
// Copyright Amazon.com Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

/* This is a simple example of using a caching CMM with a KMS keyring
 * to encrypt and decrypt using the AWS Encryption SDK for JavaScript in a browser.
 */

import {
  KmsKeyringBrowser,
  KMS,
  getClient,
  buildClient,
  CommitmentPolicy,
  WebCryptoCachingMaterialsManager,
  getLocalCryptographicMaterialsCache,
} from '@aws-crypto/client-browser'
import { toBase64 } from '@aws-sdk/util-base64-browser'

/* This builds the client with the REQUIRE_ENCRYPT_REQUIRE_DECRYPT commitment policy,
 * which enforces that this client only encrypts using committing algorithm suites
 * and enforces that this client
 * will only decrypt encrypted messages
 * that were created with a committing algorithm suite.
 * This is the default commitment policy
 * if you build the client with `buildClient()`.
 */
const { encrypt, decrypt } = buildClient(
  CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

/* This is injected by webpack.
 * The webpack.DefinePlugin or @aws-sdk/karma-credential-loader will replace the values when bundling.
 * The credential values are pulled from @aws-sdk/credential-provider-node
 * Use any method you like to get credentials into the browser.
 * See kms.webpack.config
 */
declare const credentials: {
  accessKeyId: string
  secretAccessKey: string
  sessionToken: string
}

/* This is done to facilitate testing. */
export async function testCachingCMMExample() {
  /* This example uses an AWS KMS keyring. The generator key in an AWS KMS keyring generates and encrypts the data key.
   * The caller needs kms:GenerateDataKey permission on the AWS KMS key in generatorKeyId.
   */
  const generatorKeyId =
    'arn:aws:kms:us-west-2:658956600833:alias/EncryptDecrypt'

  /* Adding additional KMS keys that can decrypt.
   * The caller must have kms:Encrypt permission for every AWS KMS key in keyIds.
   * You might list several keys in different AWS Regions.
   * This allows you to decrypt the data in any of the represented Regions.
   * In this example, the generator key
   * and the additional key are actually the same AWS KMS key.
   * In `generatorId`, this AWS KMS key is identified by its alias ARN.
   * In `keyIds`, this AWS KMS key is identified by its key ARN.
   * In practice, you would specify different AWS KMS keys,
   * or omit the `keyIds` parameter.
   * This is *only* to demonstrate how the AWS KMS key ARNs are configured.
   */
  const keyIds = [
    'arn:aws:kms:us-west-2:658956600833:key/b3537ef1-d8dc-4780-9f5a-55776cbb2f7f',
  ]

  /* Need a client provider that will inject correct credentials.
   * The credentials here are injected by webpack from your environment bundle is created
   * The credential values are pulled using @aws-sdk/credential-provider-node.
   * See kms.webpack.config
   * You should inject your credential into the browser in a secure manner
   * that works with your application.
   */
  const { accessKeyId, secretAccessKey, sessionToken } = credentials

  /* getClient takes a KMS client constructor
   * and optional configuration values.
   * The credentials can be injected here,
   * because browsers do not have a standard credential discovery process the way Node.js does.
   */
  const clientProvider = getClient(KMS, {
    credentials: {
      accessKeyId,
      secretAccessKey,
      sessionToken,
    },
  })

  /* You must configure the KMS keyring with your AWS KMS keys */
  const keyring = new KmsKeyringBrowser({
    clientProvider,
    generatorKeyId,
    keyIds,
  })

  /* Create a cache to hold the data keys (and related cryptographic material).
   * This example uses the local cache provided by the Encryption SDK.
   * The `capacity` value represents the maximum number of entries
   * that the cache can hold.
   * To make room for an additional entry,
   * the cache evicts the oldest cached entry.
   * Both encrypt and decrypt requests count independently towards this threshold.
   * Entries that exceed any cache threshold are actively removed from the cache.
   * By default, the SDK checks one item in the cache every 60 seconds (60,000 milliseconds).
   * To change this frequency, pass in a `proactiveFrequency` value
   * as the second parameter. This value is in milliseconds.
   */
  const capacity = 100
  const cache = getLocalCryptographicMaterialsCache(capacity)

  /* The partition name lets multiple caching CMMs share the same local cryptographic cache.
   * By default, the entries for each CMM are cached separately. However, if you want these CMMs to share the cache,
   * use the same partition name for both caching CMMs.
   * If you don't supply a partition name, the Encryption SDK generates a random name for each caching CMM.
   * As a result, sharing elements in the cache MUST be an intentional operation.
   */
  const partition = 'local partition name'

  /* maxAge is the time in milliseconds that an entry will be cached.
   * Elements are actively removed from the cache.
   */
  const maxAge = 1000 * 60

  /* The maximum number of bytes that will be encrypted under a single data key.
   * This value is optional,
   * but you should configure the lowest practical value.
   */
  const maxBytesEncrypted = 100

  /* The maximum number of messages that will be encrypted under a single data key.
   * This value is optional,
   * but you should configure the lowest practical value.
   */
  const maxMessagesEncrypted = 10

  const cachingCMM = new WebCryptoCachingMaterialsManager({
    backingMaterials: keyring,
    cache,
    partition,
    maxAge,
    maxBytesEncrypted,
    maxMessagesEncrypted,
  })

  /* Encryption context is a *very* powerful tool for controlling
   * and managing access.
   * When you pass an encryption context to the encrypt function,
   * the encryption context is cryptographically bound to the ciphertext.
   * If you don't pass in the same encryption context when decrypting,
   * the decrypt function fails.
   * The encryption context is ***not*** secret!
   * Encrypted data is opaque.
   * You can use an encryption context to assert things about the encrypted data.
   * The encryption context helps you to determine
   * whether the ciphertext you retrieved is the ciphertext you expect to decrypt.
   * For example, if you are only expecting data from 'us-west-2',
   * the appearance of a different AWS Region in the encryption context can indicate malicious interference.
   * See: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/concepts.html#encryption-context
   *
   * Also, cached data keys are reused ***only*** when the encryption contexts passed into the functions are an exact case-sensitive match.
   * See: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/data-caching-details.html#caching-encryption-context
   */
  const encryptionContext = {
    stage: 'demo',
    purpose: 'simple demonstration app',
    origin: 'us-west-2',
  }

  /* Find data to encrypt. */
  const plainText = new Uint8Array([1, 2, 3, 4, 5])

  /* Encrypt the data.
   * The caching CMM only reuses data keys
   * when it know the length (or an estimate) of the plaintext.
   * However, in the browser,
   * you must provide all of the plaintext to the encrypt function.
   * Therefore, the encrypt function in the browser knows the length of the plaintext
   * and does not accept a plaintextLength option.
   */
  const { result } = await encrypt(cachingCMM, plainText, { encryptionContext })

  /* Log the plain text
   * only for testing and to show that it works.
   */
  console.log('plainText:', plainText)
  document.write('</br>plainText:' + plainText + '</br>')

  /* Log the base64-encoded result
   * so that you can try decrypting it with another AWS Encryption SDK implementation.
   */
  const resultBase64 = toBase64(result)
  console.log(resultBase64)
  document.write(resultBase64)

  /* Decrypt the data.
   * NOTE: This decrypt request will not use the data key
   * that was cached during the encrypt operation.
   * Data keys for encrypt and decrypt operations are cached separately.
   */
  const { plaintext, messageHeader } = await decrypt(cachingCMM, result)

  /* Grab the encryption context so you can verify it. */
  const { encryptionContext: decryptedContext } = messageHeader

  /* Verify the encryption context.
   * If you use an algorithm suite with signing,
   * the Encryption SDK adds a name-value pair to the encryption context that contains the public key.
   * Because the encryption context might contain additional key-value pairs,
   * do not include a test that requires that all key-value pairs match.
   * Instead, verify that the key-value pairs that you supplied to the `encrypt` function are included in the encryption context that the `decrypt` function returns.
   */
  Object.entries(encryptionContext).forEach(([key, value]) => {
    if (decryptedContext[key] !== value)
      throw new Error('Encryption Context does not match expected values')
  })

  /* Log the clear message
   * only for testing and to show that it works.
   */
  document.write('</br>Decrypted:' + plaintext)
  console.log(plaintext)

  /* Return the values to make testing easy. */
  return { plainText, plaintext }
}
```

------
#### [ JavaScript Node.js ]

```
// Copyright Amazon.com Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import {
  KmsKeyringNode,
  buildClient,
  CommitmentPolicy,
  NodeCachingMaterialsManager,
  getLocalCryptographicMaterialsCache,
} from '@aws-crypto/client-node'

/* This builds the client with the REQUIRE_ENCRYPT_REQUIRE_DECRYPT commitment policy,
 * which enforces that this client only encrypts using committing algorithm suites
 * and enforces that this client
 * will only decrypt encrypted messages
 * that were created with a committing algorithm suite.
 * This is the default commitment policy
 * if you build the client with `buildClient()`.
 */
const { encrypt, decrypt } = buildClient(
  CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

export async function cachingCMMNodeSimpleTest() {
  /* An AWS KMS key is required to generate the data key.
   * You need kms:GenerateDataKey permission on the AWS KMS key in generatorKeyId.
   */
  const generatorKeyId =
    'arn:aws:kms:us-west-2:658956600833:alias/EncryptDecrypt'

  /* Adding alternate AWS KMS keys that can decrypt.
   * Access to kms:Encrypt is required for every AWS KMS key in keyIds.
   * You might list several keys in different AWS Regions.
   * This allows you to decrypt the data in any of the represented Regions.
   * In this example, the generator key
   * and the additional key are actually the same AWS KMS key.
   * In `generatorId`, this AWS KMS key is identified by its alias ARN.
   * In `keyIds`, this AWS KMS key is identified by its key ARN.
   * In practice, you would specify different AWS KMS keys,
   * or omit the `keyIds` parameter.
   * This is *only* to demonstrate how the AWS KMS key ARNs are configured.
   */
  const keyIds = [
    'arn:aws:kms:us-west-2:658956600833:key/b3537ef1-d8dc-4780-9f5a-55776cbb2f7f',
  ]

  /* The AWS KMS keyring must be configured with the desired AWS KMS keys
   * This example passes the keyring to the caching CMM
   * instead of using it directly.
   */
  const keyring = new KmsKeyringNode({ generatorKeyId, keyIds })

  /* Create a cache to hold the data keys (and related cryptographic material).
   * This example uses the local cache provided by the Encryption SDK.
   * The `capacity` value represents the maximum number of entries
   * that the cache can hold.
   * To make room for an additional entry,
   * the cache evicts the oldest cached entry.
   * Both encrypt and decrypt requests count independently towards this threshold.
   * Entries that exceed any cache threshold are actively removed from the cache.
   * By default, the SDK checks one item in the cache every 60 seconds (60,000 milliseconds).
   * To change this frequency, pass in a `proactiveFrequency` value
   * as the second parameter. This value is in milliseconds.
   */
  const capacity = 100
  const cache = getLocalCryptographicMaterialsCache(capacity)

  /* The partition name lets multiple caching CMMs share the same local cryptographic cache.
   * By default, the entries for each CMM are cached separately. However, if you want these CMMs to share the cache,
   * use the same partition name for both caching CMMs.
   * If you don't supply a partition name, the Encryption SDK generates a random name for each caching CMM.
   * As a result, sharing elements in the cache MUST be an intentional operation.
   */
  const partition = 'local partition name'

  /* maxAge is the time in milliseconds that an entry will be cached.
   * Elements are actively removed from the cache.
   */
  const maxAge = 1000 * 60

  /* The maximum amount of bytes that will be encrypted under a single data key.
   * This value is optional,
   * but you should configure the lowest value possible.
   */
  const maxBytesEncrypted = 100

  /* The maximum number of messages that will be encrypted under a single data key.
   * This value is optional,
   * but you should configure the lowest value possible.
   */
  const maxMessagesEncrypted = 10

  const cachingCMM = new NodeCachingMaterialsManager({
    backingMaterials: keyring,
    cache,
    partition,
    maxAge,
    maxBytesEncrypted,
    maxMessagesEncrypted,
  })

  /* Encryption context is a *very* powerful tool for controlling
   * and managing access.
   * When you pass an encryption context to the encrypt function,
   * the encryption context is cryptographically bound to the ciphertext.
   * If you don't pass in the same encryption context when decrypting,
   * the decrypt function fails.
   * The encryption context is ***not*** secret!
   * Encrypted data is opaque.
   * You can use an encryption context to assert things about the encrypted data.
   * The encryption context helps you to determine
   * whether the ciphertext you retrieved is the ciphertext you expect to decrypt.
   * For example, if you are only expecting data from 'us-west-2',
   * the appearance of a different AWS Region in the encryption context can indicate malicious interference.
   * See: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/concepts.html#encryption-context
   *
   * Also, cached data keys are reused ***only*** when the encryption contexts passed into the functions are an exact case-sensitive match.
   * See: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/data-caching-details.html#caching-encryption-context
   */
  const encryptionContext = {
    stage: 'demo',
    purpose: 'simple demonstration app',
    origin: 'us-west-2',
  }

  /* Find data to encrypt.  A simple string. */
  const cleartext = 'asdf'

  /* Encrypt the data.
   * The caching CMM only reuses data keys
   * when it knows the length (or an estimate) of the plaintext.
   * If you do not know the length
   * because the data is a stream,
   * provide an estimate of the largest expected value.
   *
   * If your estimate is smaller than the actual plaintext length,
   * the AWS Encryption SDK will throw an exception.
   *
   * If the plaintext is not a stream,
   * the AWS Encryption SDK uses the actual plaintext length
   * instead of any length you provide.
   */
  const { result } = await encrypt(cachingCMM, cleartext, {
    encryptionContext,
    plaintextLength: 4,
  })

  /* Decrypt the data.
   * NOTE: This decrypt request will not use the data key
   * that was cached during the encrypt operation.
   * Data keys for encrypt and decrypt operations are cached separately.
   */
  const { plaintext, messageHeader } = await decrypt(cachingCMM, result)

  /* Grab the encryption context so you can verify it. */
  const { encryptionContext: decryptedContext } = messageHeader

  /* Verify the encryption context.
   * If you use an algorithm suite with signing,
   * the Encryption SDK adds a name-value pair to the encryption context that contains the public key.
   * Because the encryption context might contain additional key-value pairs,
   * do not include a test that requires that all key-value pairs match.
   * Instead, verify that the key-value pairs that you supplied to the `encrypt` function are included in the encryption context that the `decrypt` function returns.
   */
  Object.entries(encryptionContext).forEach(([key, value]) => {
    if (decryptedContext[key] !== value)
      throw new Error('Encryption Context does not match expected values')
  })

  /* Return the values so the code can be tested. */
  return { plaintext, result, cleartext, messageHeader }
}
```

------
#### [ Python ]

```
# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Example of encryption with data key caching."""
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy


def encrypt_with_caching(kms_key_arn, max_age_in_cache, cache_capacity):
    """Encrypts a string using an &KMS; key and data key caching.

    :param str kms_key_arn: Amazon Resource Name (ARN) of the &KMS; key
    :param float max_age_in_cache: Maximum time in seconds that a cached entry can be used
    :param int cache_capacity: Maximum number of entries to retain in cache at once
    """
    # Data to be encrypted
    my_data = "My plaintext data"

    # Security thresholds
    #   Max messages (or max bytes) per data key are optional
    MAX_ENTRY_MESSAGES = 100

    # Create an encryption context
    encryption_context = {"purpose": "test"}

    # Set up an encryption client with an explicit commitment policy. Note that if you do not explicitly choose a
    # commitment policy, REQUIRE_ENCRYPT_REQUIRE_DECRYPT is used by default.
    client = aws_encryption_sdk.EncryptionSDKClient(commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT)

    # Create a master key provider for the &KMS; key
    key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[kms_key_arn])

    # Create a local cache
    cache = aws_encryption_sdk.LocalCryptoMaterialsCache(cache_capacity)

    # Create a caching CMM
    caching_cmm = aws_encryption_sdk.CachingCryptoMaterialsManager(
        master_key_provider=key_provider,
        cache=cache,
        max_age=max_age_in_cache,
        max_messages_encrypted=MAX_ENTRY_MESSAGES,
    )

    # When the call to encrypt data specifies a caching CMM,
    # the encryption operation uses the data key cache specified
    # in the caching CMM
    encrypted_message, _header = client.encrypt(
        source=my_data, materials_manager=caching_cmm, encryption_context=encryption_context
    )

    return encrypted_message
```

------

# Setting cache security thresholds
<a name="thresholds"></a>

When you implement data key caching, you need to configure the security thresholds that the [caching CMM](data-caching-details.md#caching-cmm) enforces.

The security thresholds help you to limit how long each cached data key is used and how much data is protected under each data key. The caching CMM returns cached data keys only when the cache entry conforms to all of the security thresholds. If the cache entry exceeds any threshold, the entry is not used for the current operation and is evicted from the cache as soon as possible. The first use of each data key (before caching) is exempt from these thresholds.

As a rule, use the minimum amount of caching that is required to meet your cost and performance goals.

The AWS Encryption SDK only caches data keys that are encrypted by using a [key derivation function](https://en.wikipedia.org/wiki/Key_derivation_function). It also establishes upper limits for some of the threshold values. These restrictions ensure that data keys are not reused beyond their cryptographic limits. However, because your plaintext data keys are cached (in memory, by default), try to minimize the time that the keys are saved. Also, try to limit the data that might be exposed if a key is compromised.

For examples of setting cache security thresholds, see [AWS Encryption SDK: How to Decide if Data Key Caching is Right for Your Application](https://aws.amazon.com/blogs/security/aws-encryption-sdk-how-to-decide-if-data-key-caching-is-right-for-your-application/) in the AWS Security Blog.

**Note**  
The caching CMM enforces all of the following thresholds. If you do not specify an optional value, the caching CMM uses the default value.  
To disable data key caching temporarily, the Java and Python implementations of the AWS Encryption SDK provide a *null cryptographic materials cache* (null cache). The null cache returns a miss for every `GET` request and does not respond to `PUT` requests. We recommend that you use the null cache instead of setting the [cache capacity](data-caching-details.md#simplecache) or security thresholds to 0. For more information, see the null cache in [Java](https://aws.github.io/aws-encryption-sdk-java/com/amazonaws/encryptionsdk/caching/NullCryptoMaterialsCache.html) and [Python](https://aws-encryption-sdk-python.readthedocs.io/en/latest/generated/aws_encryption_sdk.caches.null.html).

**Maximum age (required)**  
Determines how long a cached entry can be used, beginning when it was added. This value is required. Enter a value greater than 0. The AWS Encryption SDK does not limit the maximum age value.  
All language implementations of the AWS Encryption SDK define the maximum age in seconds, except for the AWS Encryption SDK for JavaScript, which uses milliseconds.  
Use the shortest interval that still allows your application to benefit from the cache. You can use the maximum age threshold like a key rotation policy. Use it to limit the reuse of data keys, minimize exposure of cryptographic materials, and evict data keys whose policies might have changed while they were cached.

**Maximum messages encrypted (optional)**  
Specifies the maximum number of messages that a cached data key can encrypt. This value is optional. Enter a value between 1 and 2^32. The default value is 2^32 messages.  
Set the number of messages protected by each cached key to be large enough to get value from reuse, but small enough to limit the number of messages that might be exposed if a key is compromised.

**Maximum bytes encrypted (optional)**  
Specifies the maximum number of bytes that a cached data key can encrypt. This value is optional. Enter a value between 0 and 2^63 - 1. The default value is 2^63 - 1. A value of 0 lets you use data key caching only when you are encrypting empty message strings.  
The bytes in the current request are included when evaluating this threshold. If the bytes processed, plus the bytes in the current request, exceed the threshold, the cached data key is evicted from the cache, even though it might have been usable on a smaller request.
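Taken together, the three thresholds can be summarized in a short sketch. The following Python code is a minimal, hypothetical model of how a caching CMM might evaluate a cache entry before reusing it; the class and parameter names are illustrative, not the SDK's actual API. Note how the current request's bytes and message count toward the limits, matching the eviction behavior described above.

```python
import time

# Hypothetical model of a cache entry's bookkeeping metadata.
class CacheEntry:
    def __init__(self):
        self.created_at = time.monotonic()
        self.messages_encrypted = 0
        self.bytes_encrypted = 0

def entry_is_usable(entry, plaintext_length, *,
                    max_age_seconds, max_messages=2**32, max_bytes=2**63 - 1):
    """Return True if the entry conforms to all security thresholds.

    The bytes in the *current* request count toward the byte threshold,
    and the current message counts toward the message threshold.
    """
    age_ok = (time.monotonic() - entry.created_at) < max_age_seconds
    messages_ok = entry.messages_encrypted + 1 <= max_messages
    bytes_ok = entry.bytes_encrypted + plaintext_length <= max_bytes
    return age_ok and messages_ok and bytes_ok

entry = CacheEntry()
entry.messages_encrypted = 9
entry.bytes_encrypted = 90

# A 10-byte request still fits under max_bytes=100 and max_messages=10 ...
print(entry_is_usable(entry, 10, max_age_seconds=60,
                      max_messages=10, max_bytes=100))   # True
# ... but an 11-byte request would push the entry past the byte threshold.
print(entry_is_usable(entry, 11, max_age_seconds=60,
                      max_messages=10, max_bytes=100))   # False
```

In the real SDK, an entry that fails a check is not merely skipped; the caching CMM also tells the cache to evict it.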

# Data key caching details
<a name="data-caching-details"></a>

Most applications can use the default implementation of data key caching without writing custom code. This section describes the default implementation and some details about the options.

**Topics**
+ [How data key caching works](#how-caching-works)
+ [Creating a cryptographic materials cache](#simplecache)
+ [Creating a caching cryptographic materials manager](#caching-cmm)
+ [What is in a data key cache entry?](#cache-entries)
+ [Encryption context: How to select cache entries](#caching-encryption-context)
+ [Is my application using cached data keys?](#caching-effect)

## How data key caching works
<a name="how-caching-works"></a>

When you use data key caching in a request to encrypt or decrypt data, the AWS Encryption SDK first searches the cache for a data key that matches the request. If it finds a valid match, it uses the cached data key to encrypt the data. Otherwise, it generates a new data key, just as it would without the cache.

Data key caching is not used for data of unknown size, such as streamed data. This allows the caching CMM to properly enforce the [maximum bytes threshold](thresholds.md). To avoid this behavior, add the message size to the encryption request.

In addition to a cache, data key caching uses a [caching cryptographic materials manager](#caching-cmm) (caching CMM). The caching CMM is a specialized [cryptographic materials manager (CMM)](concepts.md#crypt-materials-manager) that interacts with a [cache](#simplecache) and an underlying [CMM](concepts.md#crypt-materials-manager). (When you specify a [master key provider](concepts.md#master-key-provider) or keyring, the AWS Encryption SDK creates a default CMM for you.) The caching CMM caches the data keys that its underlying CMM returns. The caching CMM also enforces the cache security thresholds that you set.

To prevent the wrong data key from being selected from the cache, all compatible caching CMMs require that the following properties of the cached cryptographic materials match the materials request.
+ [Algorithm suite](concepts.md#crypto-algorithm)
+ [Encryption context](#caching-encryption-context) (even when empty)
+ Partition name (a string that identifies the caching CMM)
+ (Decryption only) Encrypted data keys
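As a rough illustration of how such matching can work, the following Python sketch derives an opaque cache entry ID by hashing the properties listed above. The serialization format here is invented for the example; it is not the SDK's actual cache-key format.

```python
import hashlib
import json

def cache_entry_id(partition_name, algorithm_suite_id, encryption_context):
    """Derive an opaque cache key from the properties that must match.

    Sorting the context pairs makes the ID independent of insertion order,
    while keeping the match exact and case-sensitive.
    """
    material = json.dumps(
        {
            "partition": partition_name,
            "algorithm": algorithm_suite_id,
            "context": sorted(encryption_context.items()),
        },
        separators=(",", ":"),
    )
    return hashlib.sha512(material.encode("utf-8")).hexdigest()

a = cache_entry_id("local partition name", 0x0578, {"stage": "demo"})
b = cache_entry_id("local partition name", 0x0578, {"stage": "demo"})
c = cache_entry_id("local partition name", 0x0578, {"stage": "Demo"})
print(a == b)  # True: identical properties select the same entry
print(a == c)  # False: encryption context comparison is case-sensitive
```

Because the ID is a hash of all matching properties, any difference in partition name, algorithm suite, or encryption context produces a different entry, and therefore a cache miss.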

**Note**  
The AWS Encryption SDK caches data keys only when the [algorithm suite](concepts.md#crypto-algorithm) uses a [key derivation function](https://en.wikipedia.org/wiki/Key_derivation_function).

The following workflows show how a request to encrypt data is processed with and without data key caching. They show how the caching components that you create, including the cache and the caching CMM, are used in the process.

### Encrypting data without caching
<a name="workflow-wo-cache"></a>

To get encryption materials without caching:

1. An application asks the AWS Encryption SDK to encrypt data.

   The request specifies a master key provider or keyring. The AWS Encryption SDK creates a default CMM that interacts with your master key provider or keyring.

1. The AWS Encryption SDK asks the CMM for encryption materials (get cryptographic materials).

1. The CMM asks its [keyring](concepts.md#keyring) (C and JavaScript) or [master key provider](concepts.md#master-key-provider) (Java and Python) for cryptographic materials. This might involve a call to a cryptographic service, such as AWS Key Management Service (AWS KMS). The CMM returns the encryption materials to the AWS Encryption SDK.

1. The AWS Encryption SDK uses the plaintext data key to encrypt the data. It stores the encrypted data and encrypted data keys in an [encrypted message](concepts.md#message) that it returns to the user.

![\[Encrypting data without caching\]](http://docs.aws.amazon.com/zh_cn/encryption-sdk/latest/developer-guide/images/encrypt-workflow-no-cache.png)


### Encrypting data with caching
<a name="workflow-with-cache"></a>

To get encryption materials and cache data keys:

1. An application asks the AWS Encryption SDK to encrypt data.

   The request specifies a [caching cryptographic materials manager (caching CMM)](#caching-cmm) that is associated with an underlying cryptographic materials manager (CMM). When you specify a master key provider or keyring, the AWS Encryption SDK creates a default CMM for you.

1. The SDK asks the specified caching CMM for encryption materials.

1. The caching CMM requests encryption materials from the cache.

   1. If the cache finds a match, it updates the age and use values of the matched cache entry, and returns the cached encryption materials to the caching CMM.

      If the cache entry conforms to its [security thresholds](thresholds.md), the caching CMM returns it to the SDK. Otherwise, it tells the cache to evict the entry and proceeds as though there were no match.

   1. If the cache cannot find a valid match, the caching CMM asks its underlying CMM to generate a new data key.

      The underlying CMM gets the cryptographic materials from its keyring (C and JavaScript) or master key provider (Java and Python). This might involve a call to a service, such as AWS Key Management Service. The underlying CMM returns the plaintext and encrypted copies of the data key to the caching CMM.

      The caching CMM saves the new encryption materials in the cache.

1. The caching CMM returns the encryption materials to the AWS Encryption SDK.

1. The AWS Encryption SDK uses the plaintext data key to encrypt the data. It stores the encrypted data and encrypted data keys in an [encrypted message](concepts.md#message) that it returns to the user.

![\[Encrypting data and caching the data key\]](http://docs.aws.amazon.com/zh_cn/encryption-sdk/latest/developer-guide/images/encrypt-workflow-with-cache.png)
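The caching workflow above can be modeled in a few lines. This Python sketch uses a plain dict as the cache and a stub in place of the underlying CMM, so the names and structure are illustrative only; the real caching CMM also enforces age and usage thresholds before reusing an entry.

```python
import secrets

class StubBackingCMM:
    """Stand-in for the underlying CMM; counts how often it is called."""
    def __init__(self):
        self.calls = 0

    def get_encryption_materials(self):
        self.calls += 1
        return {"plaintext_key": secrets.token_bytes(32)}

class SketchCachingCMM:
    def __init__(self, backing_cmm):
        self._cache = {}
        self._backing = backing_cmm

    def get_encryption_materials(self, entry_id):
        # 1. Try the cache first.
        materials = self._cache.get(entry_id)
        if materials is None:
            # 2. On a miss, delegate to the underlying CMM and cache the result.
            materials = self._backing.get_encryption_materials()
            self._cache[entry_id] = materials
        return materials

backing = StubBackingCMM()
cmm = SketchCachingCMM(backing)
first = cmm.get_encryption_materials("entry-A")
second = cmm.get_encryption_materials("entry-A")   # cache hit, key reused
third = cmm.get_encryption_materials("entry-B")    # different ID, new key
print(backing.calls)    # 2
print(first is second)  # True
```

The key point is that the underlying CMM (and therefore AWS KMS, in a real deployment) is called only on cache misses.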


## Creating a cryptographic materials cache
<a name="simplecache"></a>

The AWS Encryption SDK defines the requirements for a cryptographic materials cache used in data key caching. It also provides a local cache, which is a configurable, in-memory, [least recently used (LRU) cache](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_Recently_Used_.28LRU.29). To create an instance of the local cache, use the `LocalCryptoMaterialsCache` constructor in Java and Python, the `getLocalCryptographicMaterialsCache` function in JavaScript, or the `aws_cryptosdk_materials_cache_local_new` constructor in C.

The local cache includes logic for basic cache management, including adding, evicting, and matching cached entries, and maintaining the cache. You don't need to write any custom cache management logic. You can use the local cache as is, customize it, or substitute any compatible cache.

When you create a local cache, you set its *capacity*, that is, the maximum number of entries that the cache can hold. This setting helps you to design an efficient cache with limited data key reuse.
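A minimal LRU cache with a fixed capacity can be sketched with Python's `OrderedDict`. This is only a model of the eviction behavior, not the SDK's `LocalCryptoMaterialsCache` implementation, which also removes entries that exceed the security thresholds.

```python
from collections import OrderedDict

class SketchLruCache:
    """Evicts the least recently used entry once capacity is exceeded."""
    def __init__(self, capacity):
        self._capacity = capacity
        self._entries = OrderedDict()

    def put(self, entry_id, materials):
        if entry_id in self._entries:
            self._entries.move_to_end(entry_id)
        self._entries[entry_id] = materials
        if len(self._entries) > self._capacity:
            self._entries.popitem(last=False)  # drop the oldest entry

    def get(self, entry_id):
        materials = self._entries.get(entry_id)
        if materials is not None:
            self._entries.move_to_end(entry_id)  # mark as recently used
        return materials

cache = SketchLruCache(capacity=2)
cache.put("a", "key-a")
cache.put("b", "key-b")
cache.get("a")           # touch "a" so "b" becomes least recently used
cache.put("c", "key-c")  # capacity exceeded: "b" is evicted
print(cache.get("b"))    # None
print(cache.get("a"))    # key-a
```

Choosing the capacity is a trade-off: a larger cache increases reuse (fewer calls to generate data keys), while a smaller one limits how many distinct plaintext keys are held in memory at once.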

 The AWS Encryption SDK for Java and the AWS Encryption SDK for Python also provide a *null cryptographic materials cache* (NullCryptoMaterialsCache). The NullCryptoMaterialsCache returns a miss for all `GET` operations and does not respond to `PUT` operations. You can use the NullCryptoMaterialsCache in testing, or to temporarily disable caching in an application that includes caching code.

In the AWS Encryption SDK, each cryptographic materials cache is associated with a [caching cryptographic materials manager](#caching-cmm) (caching CMM). The caching CMM gets data keys from the cache, puts data keys in the cache, and enforces the [security thresholds](thresholds.md) that you set. When you create a caching CMM, you specify the cache that it uses and the underlying CMM or master key provider that generates the data keys that it caches.

## Creating a caching cryptographic materials manager
<a name="caching-cmm"></a>

To enable data key caching, you create a [cache](#simplecache) and a *caching cryptographic materials manager* (caching CMM). Then, in your requests to encrypt or decrypt data, you specify the caching CMM, instead of a standard [cryptographic materials manager (CMM)](concepts.md#crypt-materials-manager), [master key provider](concepts.md#master-key-provider), or [keyring](concepts.md#keyring).

There are two types of CMMs. Both get data keys (and related cryptographic materials), but in different ways, as follows:
+ A CMM is associated with a keyring (C or JavaScript) or a master key provider (Java and Python). When the SDK asks the CMM for encryption or decryption materials, the CMM gets the materials from its keyring or master key provider. In Java and Python, the CMM uses the master keys to generate, encrypt, or decrypt the data keys. In C and JavaScript, the keyring generates, encrypts, and returns the cryptographic materials.
+ A caching CMM is associated with one cache, such as a [local cache](#simplecache), and an underlying CMM. When the SDK asks the caching CMM for cryptographic materials, the caching CMM tries to get them from the cache. If it can't find a match, the caching CMM asks its underlying CMM for the materials. Then, it caches the new cryptographic materials before returning them to the caller.

The caching CMM also enforces the [security thresholds](thresholds.md) that you set for each cache entry. Because the security thresholds are set in and enforced by the caching CMM, you can use any compatible cache, even if the cache is not designed for sensitive material.

## What is in a data key cache entry?
<a name="cache-entries"></a>

Data key caching stores data keys and related cryptographic materials in a cache. Each entry includes the elements listed below. You might find this information useful when you're deciding whether to use the data key caching feature, and when you're setting security thresholds on a caching cryptographic materials manager (caching CMM).

**Cached entries for encryption requests**  
The entries that are added to a data key cache as a result of an encryption operation include the following elements:
+ Plaintext data key
+ Encrypted data keys (one or more)
+ [Encryption context](#caching-encryption-context)
+ Message signing key (if one is used)
+ [Algorithm suite](concepts.md#crypto-algorithm)
+ Metadata, including usage counters for enforcing security thresholds

**Cached entries for decryption requests**  
The entries that are added to a data key cache as a result of a decryption operation include the following elements:
+ Plaintext data key
+ Signature verification key (if one is used)
+ Metadata, including usage counters for enforcing security thresholds

## Encryption context: How to select cache entries
<a name="caching-encryption-context"></a>

You can specify an encryption context in any request to encrypt data. However, the encryption context plays a special role in data key caching. It lets you create subgroups of data keys in your cache, even when the data keys originate from the same caching CMM.

An [encryption context](concepts.md#encryption-context) is a set of key-value pairs that contain arbitrary nonsecret data. During encryption, the encryption context is cryptographically bound to the encrypted data, so the same encryption context is required to decrypt the data. In the AWS Encryption SDK, the encryption context is stored in the [encrypted message](concepts.md#message) with the encrypted data and data keys.

When you use a data key cache, you can also use the encryption context to select particular cached data keys for your encryption operations. The encryption context is saved in the cache entry with the data key (it's part of the cache entry ID). Cached data keys are reused only when their encryption contexts match. To reuse certain data keys for an encryption request, specify the same encryption context. To avoid those data keys, specify a different encryption context.

The encryption context is always optional, but recommended. If you don't specify an encryption context in your request, an empty encryption context is included in the cache entry identifier and matched to each request.
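The sub-grouping effect can be seen in a toy model: keys cached under one encryption context are invisible to requests made under another. The cache structure below is invented for illustration and stands in for the SDK's real entry-matching logic.

```python
import secrets

# Toy cache keyed by the (frozen) encryption context of each request.
cache = {}

def data_key_for(encryption_context):
    """Reuse the cached key for this exact context; otherwise make a new one."""
    context_key = tuple(sorted(encryption_context.items()))
    if context_key not in cache:
        cache[context_key] = secrets.token_bytes(32)  # simulate generating a data key
    return cache[context_key]

tenant_a = data_key_for({"tenant": "a"})
tenant_a_again = data_key_for({"tenant": "a"})
tenant_b = data_key_for({"tenant": "b"})

print(tenant_a == tenant_a_again)  # True: same context reuses the cached key
print(tenant_a == tenant_b)        # False: a different context selects a new key
```

This is why, for example, a multi-tenant application can include a tenant ID in the encryption context to ensure that cached data keys are never shared across tenants.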

## Is my application using cached data keys?
<a name="caching-effect"></a>

Data key caching is an optimization strategy that is highly effective for certain applications and workloads. However, because it entails some risk, it's important to determine how effective it is likely to be for your situation, and then decide whether the benefits outweigh the risks.

Because data key caching reuses data keys, the most obvious effect is reducing the number of calls to generate new data keys. When data key caching is implemented, the AWS Encryption SDK calls the AWS KMS `GenerateDataKey` operation only to create the initial data key and when the cache misses. However, caching improves performance measurably only in applications that generate numerous data keys with the same characteristics, including the same encryption context and algorithm suite.

To determine whether your implementation of the AWS Encryption SDK is actually using data keys from the cache, try the following techniques.
+ In the logs of your master key infrastructure, check the frequency of calls to create new data keys. When data key caching is effective, the number of calls to create new keys should drop noticeably. For example, if you are using an AWS KMS master key provider or keyring, search the CloudTrail logs for [GenerateDataKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html) calls.
+ Compare the [encrypted messages](concepts.md#message) that the AWS Encryption SDK returns in response to different encrypt requests. For example, if you are using the AWS Encryption SDK for Java, compare the [ParsedCiphertext](https://aws.github.io/aws-encryption-sdk-java/com/amazonaws/encryptionsdk/ParsedCiphertext.html) objects from different encrypt calls. In the AWS Encryption SDK for JavaScript, compare the contents of the `encryptedDataKeys` property of the [MessageHeader](https://github.com/aws/aws-encryption-sdk-javascript/blob/master/modules/serialize/src/types.ts#L21). When data keys are reused, the encrypted data keys in the encrypted message are identical.

# Data key caching example
<a name="sample-cache-example"></a>

This example uses [data key caching](data-key-caching.md) with a [local cache](data-caching-details.md#simplecache) to speed up an application in which data generated by multiple devices is encrypted and stored in different Regions.

In this scenario, multiple data producers generate data, encrypt it, and write to a [Kinesis stream](https://aws.amazon.com/kinesis/streams/) in each Region. [AWS Lambda](https://aws.amazon.com/lambda/) functions (consumers) decrypt the streams and write plaintext data to a DynamoDB table in the Region. Data producers and consumers use the AWS Encryption SDK and an [AWS KMS master key provider](concepts.md#master-key-provider). To reduce calls to KMS, each producer and consumer has its own local cache.

You can find the source code for these examples in [Java and Python](sample-cache-example-code.md). The sample also includes a CloudFormation template that defines the resources for the samples.

![\[Diagram showing how data producers and consumers use Amazon Kinesis Data Streams, Amazon DynamoDB, and AWS KMS\]](http://docs.aws.amazon.com/zh_cn/encryption-sdk/latest/developer-guide/images/simplecache-example.png)


## Local cache results
<a name="caching-example-impact"></a>

The following tables show that a local cache reduces the total calls to KMS in this example (per Region per second) to 1% of their original value.

**Producer requests**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/encryption-sdk/latest/developer-guide/sample-cache-example.html)

**Consumer requests**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/encryption-sdk/latest/developer-guide/sample-cache-example.html)

# Data key caching example code
<a name="sample-cache-example-code"></a>

This code sample creates a simple implementation of data key caching with a [local cache](data-caching-details.md#simplecache) in Java and Python. The code creates two instances of a local cache: one for [data producers](#caching-producer) that are encrypting data and one for [data consumers](#caching-consumer) (AWS Lambda functions) that are decrypting data. For details about the implementation of data key caching in each language, see the [Javadoc](https://aws.github.io/aws-encryption-sdk-java/) and [Python documentation](https://aws-encryption-sdk-python.readthedocs.io/en/latest/) for the AWS Encryption SDK.

Data key caching is available for all [programming languages](programming-languages.md) that the AWS Encryption SDK supports.

For complete and tested examples of using data key caching with the AWS Encryption SDK, see:
+ C/C++: [caching_cmm.cpp](https://github.com/aws/aws-encryption-sdk-c/blob/master/examples/caching_cmm.cpp)
+ Java: [SimpleDataKeyCachingExample.java](https://github.com/aws/aws-encryption-sdk-java/blob/master/src/examples/java/com/amazonaws/crypto/examples/v2/SimpleDataKeyCachingExample.java)
+ JavaScript Browser: [caching_cmm.ts](https://github.com/aws/aws-encryption-sdk-javascript/blob/master/modules/example-browser/src/caching_cmm.ts)
+ JavaScript Node.js: [caching_cmm.ts](https://github.com/aws/aws-encryption-sdk-javascript/blob/master/modules/example-node/src/caching_cmm.ts)
+ Python: [data_key_caching_basic.py](https://github.com/aws/aws-encryption-sdk-python/blob/master/examples/src/legacy/data_key_caching_basic.py)

## Producer
<a name="caching-producer"></a>

The producer gets a map, converts it to JSON, uses the AWS Encryption SDK to encrypt it, and pushes the ciphertext record to a [Kinesis stream](https://aws.amazon.com/kinesis/streams/) in each AWS Region.

The code defines a [caching cryptographic materials manager](data-caching-details.md#caching-cmm) (caching CMM) and associates it with a [local cache](data-caching-details.md#simplecache) and an underlying [AWS KMS master key provider](concepts.md#master-key-provider). The caching CMM caches the data keys (and [related cryptographic materials](data-caching-details.md#cache-entries)) from the master key provider. It also interacts with the cache on behalf of the SDK and enforces the security thresholds that you set.

Because the call to the encrypt method specifies a caching CMM, instead of a regular [cryptographic materials manager (CMM)](concepts.md#crypt-materials-manager) or master key provider, the encryption will use data key caching.

------
#### [ Java ]

The following example uses version 2.*x* of the AWS Encryption SDK for Java. Version 3.*x* of the AWS Encryption SDK for Java deprecates the data key caching CMM. With version 3.*x*, you can instead use the [AWS KMS Hierarchical keyring](use-hierarchical-keyring.md), an alternate cryptographic material caching solution.

```
/*
 * Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except
 * in compliance with the License. A copy of the License is located at
 *
 * http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 */
package com.amazonaws.crypto.examples.kinesisdatakeycaching;

import com.amazonaws.encryptionsdk.AwsCrypto;
import com.amazonaws.encryptionsdk.CommitmentPolicy;
import com.amazonaws.encryptionsdk.CryptoResult;
import com.amazonaws.encryptionsdk.MasterKeyProvider;
import com.amazonaws.encryptionsdk.caching.CachingCryptoMaterialsManager;
import com.amazonaws.encryptionsdk.caching.LocalCryptoMaterialsCache;
import com.amazonaws.encryptionsdk.kmssdkv2.KmsMasterKey;
import com.amazonaws.encryptionsdk.kmssdkv2.KmsMasterKeyProvider;
import com.amazonaws.encryptionsdk.multi.MultipleProviderFactory;
import com.amazonaws.util.json.Jackson;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kms.KmsClient;

/**
 * Pushes data to Kinesis Streams in multiple Regions.
 */
public class MultiRegionRecordPusher {

    private static final long MAX_ENTRY_AGE_MILLISECONDS = 300000;
    private static final long MAX_ENTRY_USES = 100;
    private static final int MAX_CACHE_ENTRIES = 100;
    private final String streamName_;
    private final ArrayList<KinesisClient> kinesisClients_;
    private final CachingCryptoMaterialsManager cachingMaterialsManager_;
    private final AwsCrypto crypto_;

    /**
     * Creates an instance of this object with Kinesis clients for all target Regions and a cached
     * key provider containing KMS master keys in all target Regions.
     */
    public MultiRegionRecordPusher(final Region[] regions, final String kmsAliasName,
        final String streamName) {
        streamName_ = streamName;
        crypto_ = AwsCrypto.builder()
            .withCommitmentPolicy(CommitmentPolicy.RequireEncryptRequireDecrypt)
            .build();
        kinesisClients_ = new ArrayList<>();

        AwsCredentialsProvider credentialsProvider = DefaultCredentialsProvider.builder().build();

        // Build KmsMasterKey and AmazonKinesisClient objects for each target region
        List<KmsMasterKey> masterKeys = new ArrayList<>();
        for (Region region : regions) {
            kinesisClients_.add(KinesisClient.builder()
                .credentialsProvider(credentialsProvider)
                .region(region)
                .build());

            KmsMasterKey regionMasterKey = KmsMasterKeyProvider.builder()
                .defaultRegion(region)
                .builderSupplier(() -> KmsClient.builder().credentialsProvider(credentialsProvider))
                .buildStrict(kmsAliasName)
                .getMasterKey(kmsAliasName);

            masterKeys.add(regionMasterKey);
        }

        // Collect KmsMasterKey objects into single provider and add cache
        MasterKeyProvider<?> masterKeyProvider = MultipleProviderFactory.buildMultiProvider(
            KmsMasterKey.class,
            masterKeys
        );

        cachingMaterialsManager_ = CachingCryptoMaterialsManager.newBuilder()
            .withMasterKeyProvider(masterKeyProvider)
            .withCache(new LocalCryptoMaterialsCache(MAX_CACHE_ENTRIES))
            .withMaxAge(MAX_ENTRY_AGE_MILLISECONDS, TimeUnit.MILLISECONDS)
            .withMessageUseLimit(MAX_ENTRY_USES)
            .build();
    }

    /**
     * JSON serializes and encrypts the received record data and pushes it to all target streams.
     */
    public void putRecord(final Map<Object, Object> data) {
        String partitionKey = UUID.randomUUID().toString();
        Map<String, String> encryptionContext = new HashMap<>();
        encryptionContext.put("stream", streamName_);

        // JSON serialize data
        String jsonData = Jackson.toJsonString(data);

        // Encrypt data
        CryptoResult<byte[], ?> result = crypto_.encryptData(
            cachingMaterialsManager_,
            jsonData.getBytes(),
            encryptionContext
        );
        byte[] encryptedData = result.getResult();

        // Put records to Kinesis stream in all Regions
        for (KinesisClient regionalKinesisClient : kinesisClients_) {
            regionalKinesisClient.putRecord(builder ->
                builder.streamName(streamName_)
                    .data(SdkBytes.fromByteArray(encryptedData))
                    .partitionKey(partitionKey));
        }
    }
}
```

------
#### [ Python ]

```
"""
Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 
Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except
in compliance with the License. A copy of the License is located at
 
https://aws.amazon.com/apache-2-0/
 
or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
import json
import uuid
 
from aws_encryption_sdk import EncryptionSDKClient, StrictAwsKmsMasterKeyProvider, CachingCryptoMaterialsManager, LocalCryptoMaterialsCache, CommitmentPolicy
from aws_encryption_sdk.key_providers.kms import KMSMasterKey
import boto3
 
 
class MultiRegionRecordPusher(object):
    """Pushes data to Kinesis Streams in multiple Regions."""
    CACHE_CAPACITY = 100
    MAX_ENTRY_AGE_SECONDS = 300.0
    MAX_ENTRY_MESSAGES_ENCRYPTED = 100
 
    def __init__(self, regions, kms_alias_name, stream_name):
        self._kinesis_clients = []
        self._stream_name = stream_name
 
        # Set up EncryptionSDKClient and keep it for use in put_record
        self._client = EncryptionSDKClient(CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT)
 
        # Set up KMSMasterKeyProvider with cache
        _key_provider = StrictAwsKmsMasterKeyProvider(key_ids=[kms_alias_name])
 
        # Add MasterKey and Kinesis client for each Region
        for region in regions:
            self._kinesis_clients.append(boto3.client('kinesis', region_name=region))
            regional_master_key = KMSMasterKey(
                client=boto3.client('kms', region_name=region),
                key_id=kms_alias_name
            )
            _key_provider.add_master_key_provider(regional_master_key)
 
        cache = LocalCryptoMaterialsCache(capacity=self.CACHE_CAPACITY)
        self._materials_manager = CachingCryptoMaterialsManager(
            master_key_provider=_key_provider,
            cache=cache,
            max_age=self.MAX_ENTRY_AGE_SECONDS,
            max_messages_encrypted=self.MAX_ENTRY_MESSAGES_ENCRYPTED
        )
 
    def put_record(self, record_data):
        """JSON serializes and encrypts the received record data and pushes it to all target streams.
 
        :param dict record_data: Data to write to stream
        """
        # Kinesis partition key to randomize write load across stream shards
        partition_key = uuid.uuid4().hex
 
        encryption_context = {'stream': self._stream_name}
 
        # JSON serialize data
        json_data = json.dumps(record_data)
 
        # Encrypt data
        encrypted_data, _header = self._client.encrypt(
            source=json_data,
            materials_manager=self._materials_manager,
            encryption_context=encryption_context
        )
 
        # Put records to Kinesis stream in all Regions
        for client in self._kinesis_clients:
            client.put_record(
                StreamName=self._stream_name,
                Data=encrypted_data,
                PartitionKey=partition_key
            )
```

------

## Consumer
<a name="caching-consumer"></a>

The data consumer is an [AWS Lambda](https://aws.amazon.com/lambda/) function that is triggered by [Kinesis](https://aws.amazon.com/kinesis/) events. It decrypts and deserializes each record and writes the plaintext record to an [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) table in the same Region.

Like the producer code, the consumer code enables data key caching by using a caching cryptographic materials manager (caching CMM) in calls to the decrypt method.

The Java code builds a master key provider in *strict mode* with a specified AWS KMS key. Strict mode isn't required when decrypting, but it's a [best practice](best-practices.md#strict-discovery-mode). The Python code uses *discovery mode*, which lets the AWS Encryption SDK use any wrapping key that encrypted a data key to decrypt it.

------
#### [ Java ]

The following example uses version 2.*x* of the AWS Encryption SDK for Java. Version 3.*x* of the AWS Encryption SDK for Java deprecates the data key caching CMM. With version 3.*x*, you can instead use the [AWS KMS Hierarchical keyring](use-hierarchical-keyring.md), an alternate cryptographic material caching solution.

This code creates a master key provider that decrypts in strict mode. The AWS Encryption SDK can use only the AWS KMS keys that you specify to decrypt your messages.

```
/*
 * Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except
 * in compliance with the License. A copy of the License is located at
 *
 * http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 */
package com.amazonaws.crypto.examples.kinesisdatakeycaching;

import com.amazonaws.encryptionsdk.AwsCrypto;
import com.amazonaws.encryptionsdk.CommitmentPolicy;
import com.amazonaws.encryptionsdk.CryptoResult;
import com.amazonaws.encryptionsdk.caching.CachingCryptoMaterialsManager;
import com.amazonaws.encryptionsdk.caching.LocalCryptoMaterialsCache;
import com.amazonaws.encryptionsdk.kmssdkv2.KmsMasterKeyProvider;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent.KinesisEventRecord;
import com.amazonaws.util.BinaryUtils;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
import software.amazon.awssdk.enhanced.dynamodb.TableSchema;

/**
 * Decrypts all incoming Kinesis records and writes records to DynamoDB.
 */
public class LambdaDecryptAndWrite {

    private static final long MAX_ENTRY_AGE_MILLISECONDS = 600000;
    private static final int MAX_CACHE_ENTRIES = 100;
    private final CachingCryptoMaterialsManager cachingMaterialsManager_;
    private final AwsCrypto crypto_;
    private final DynamoDbTable<Item> table_;

    /**
     * Because the cache is used only for decryption, the code doesn't set the max bytes or max
     * messages security thresholds, which are enforced only on data keys used for encryption.
     */
    public LambdaDecryptAndWrite() {
        String kmsKeyArn = System.getenv("CMK_ARN");
        cachingMaterialsManager_ = CachingCryptoMaterialsManager.newBuilder()
            .withMasterKeyProvider(KmsMasterKeyProvider.builder().buildStrict(kmsKeyArn))
            .withCache(new LocalCryptoMaterialsCache(MAX_CACHE_ENTRIES))
            .withMaxAge(MAX_ENTRY_AGE_MILLISECONDS, TimeUnit.MILLISECONDS)
            .build();

        crypto_ = AwsCrypto.builder()
            .withCommitmentPolicy(CommitmentPolicy.RequireEncryptRequireDecrypt)
            .build();

        String tableName = System.getenv("TABLE_NAME");
        DynamoDbEnhancedClient dynamodb = DynamoDbEnhancedClient.builder().build();
        table_ = dynamodb.table(tableName, TableSchema.fromClass(Item.class));
    }

    /**
     * @param event
     * @param context
     */
    public void handleRequest(KinesisEvent event, Context context)
        throws UnsupportedEncodingException {
        for (KinesisEventRecord record : event.getRecords()) {
            ByteBuffer ciphertextBuffer = record.getKinesis().getData();
            byte[] ciphertext = BinaryUtils.copyAllBytesFrom(ciphertextBuffer);

            // Decrypt and unpack record
            CryptoResult<byte[], ?> plaintextResult = crypto_.decryptData(cachingMaterialsManager_,
                ciphertext);

            // Verify the encryption context value
            String streamArn = record.getEventSourceARN();
            String streamName = streamArn.substring(streamArn.indexOf("/") + 1);
            if (!streamName.equals(plaintextResult.getEncryptionContext().get("stream"))) {
                throw new IllegalStateException("Wrong Encryption Context!");
            }

            // Write record to DynamoDB
            String jsonItem = new String(plaintextResult.getResult(), StandardCharsets.UTF_8);
            System.out.println(jsonItem);
            table_.putItem(Item.fromJSON(jsonItem));
        }
    }

    private static class Item {

        static Item fromJSON(String jsonText) {
            // Parse JSON and create new Item
            return new Item();
        }
    }
}
```

------
#### [ Python ]

This Python code decrypts with a master key provider in discovery mode. It lets the AWS Encryption SDK use any wrapping key that encrypted a data key to decrypt that data key. As a [best practice](best-practices.md#strict-discovery-mode), use strict mode, in which you specify the wrapping keys that can be used for decryption.

```
"""
Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 
Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except
in compliance with the License. A copy of the License is located at
 
https://aws.amazon.com/apache-2-0/
 
or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
import base64
import json
import logging
import os
 
from aws_encryption_sdk import EncryptionSDKClient, DiscoveryAwsKmsMasterKeyProvider, CachingCryptoMaterialsManager, LocalCryptoMaterialsCache, CommitmentPolicy
import boto3
 
_LOGGER = logging.getLogger(__name__)
_is_setup = False
CACHE_CAPACITY = 100
MAX_ENTRY_AGE_SECONDS = 600.0
 
def setup():
    """Sets up clients that should persist across Lambda invocations."""
    global encryption_sdk_client
    encryption_sdk_client = EncryptionSDKClient(CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT)
 
    global materials_manager
    key_provider = DiscoveryAwsKmsMasterKeyProvider()
    cache = LocalCryptoMaterialsCache(capacity=CACHE_CAPACITY)
           
    #  Because the cache is used only for decryption, the code doesn't set
    #   the max bytes or max message security thresholds that are enforced
    #   only on data keys used for encryption.
    materials_manager = CachingCryptoMaterialsManager(
        master_key_provider=key_provider,
        cache=cache,
        max_age=MAX_ENTRY_AGE_SECONDS
    )
    global table
    table_name = os.environ.get('TABLE_NAME')
    table = boto3.resource('dynamodb').Table(table_name)
    global _is_setup
    _is_setup = True
 
 
def lambda_handler(event, context):
    """Decrypts all incoming Kinesis records and writes records to DynamoDB."""
    _LOGGER.debug('New event:')
    _LOGGER.debug(event)
    if not _is_setup:
        setup()
    with table.batch_writer() as batch:
        for record in event.get('Records', []):
            # Record data base64-encoded by Kinesis
            ciphertext = base64.b64decode(record['kinesis']['data'])
 
            # Decrypt and unpack record
            plaintext, header = encryption_sdk_client.decrypt(
                source=ciphertext,
                materials_manager=materials_manager
            )
            item = json.loads(plaintext)
 
            # Verify the encryption context value
            stream_name = record['eventSourceARN'].split('/', 1)[1]
            if stream_name != header.encryption_context['stream']:
                raise ValueError('Wrong Encryption Context!')
 
            # Write record to DynamoDB
            batch.put_item(Item=item)
```
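
The handler's record handling can be exercised without AWS credentials by stubbing the Kinesis record shape. This is a minimal sketch (the record, ARN, and helper names are illustrative, not part of the SDK) of the base64 decode and the stream-name/encryption-context check in isolation:

```python
import base64


def stream_name_from_arn(stream_arn: str) -> str:
    """Extract the stream name from a Kinesis stream ARN
    (arn:aws:kinesis:region:account:stream/name)."""
    return stream_arn.split("/", 1)[1]


def check_encryption_context(record: dict, encryption_context: dict) -> bytes:
    """Decode the record payload and verify that the 'stream' value stored
    in the encryption context matches the stream the record arrived on."""
    payload = base64.b64decode(record["kinesis"]["data"])
    stream_name = stream_name_from_arn(record["eventSourceARN"])
    if stream_name != encryption_context.get("stream"):
        raise ValueError("Wrong Encryption Context!")
    return payload


# Example with a stubbed record (ARN and data are made up for illustration)
record = {
    "eventSourceARN": "arn:aws:kinesis:us-west-2:111122223333:stream/my-stream",
    "kinesis": {"data": base64.b64encode(b'{"id": "1"}').decode()},
}
print(check_encryption_context(record, {"stream": "my-stream"}))  # prints b'{"id": "1"}'
```

In the real handler, the decoded payload is the Encryption SDK ciphertext, and the encryption context comes from the decrypted message header rather than being passed in directly.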

------

# Data key caching example: CloudFormation template
<a name="sample-cache-example-cloudformation"></a>

This CloudFormation template sets up all the AWS resources required to reproduce the [data key caching example](sample-cache-example.md).
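
Before deploying, you can confirm which parameters the template expects using only the standard library. This sketch parses an abridged copy of the template's `Parameters` section (shortened here for illustration) and prints the values you must supply:

```python
import json

# Abridged copy of the template's Parameters section, for illustration only.
template = json.loads("""
{
    "Parameters": {
        "SourceCodeBucket": {"Type": "String",
            "Description": "S3 bucket containing Lambda source code zip files"},
        "StreamName": {"Type": "String",
            "Description": "Name to use for Kinesis Stream"}
    }
}
""")

# List each required parameter with its type and description
for name, spec in template["Parameters"].items():
    print(f"{name} ({spec['Type']}): {spec['Description']}")
```

Running the same loop against the full JSON template (loaded with `json.load` from whatever file name you save it under) lists all seven parameters.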

------
#### [ JSON ]

```
{
    "Parameters": {
        "SourceCodeBucket": {
            "Type": "String",
            "Description": "S3 bucket containing Lambda source code zip files"
        },
        "PythonLambdaS3Key": {
            "Type": "String",
            "Description": "S3 key containing Python Lambda source code zip file"
        },
        "PythonLambdaObjectVersionId": {
            "Type": "String",
            "Description": "S3 version id for S3 key containing Python Lambda source code zip file"
        },
        "JavaLambdaS3Key": {
            "Type": "String",
            "Description": "S3 key containing Java Lambda source code zip file"
        },
        "JavaLambdaObjectVersionId": {
            "Type": "String",
            "Description": "S3 version id for S3 key containing Java Lambda source code zip file"
        },
        "KeyAliasSuffix": {
            "Type": "String",
            "Description": "Suffix to use for KMS key alias (i.e., alias/<KeyAliasSuffix>)"
        },
        "StreamName": {
            "Type": "String",
            "Description": "Name to use for Kinesis Stream"
        }
    },
    "Resources": {
        "InputStream": {
            "Type": "AWS::Kinesis::Stream",
            "Properties": {
                "Name": {
                    "Ref": "StreamName"
                },
                "ShardCount": 2
            }
        },
        "PythonLambdaOutputTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [
                    {
                        "AttributeName": "id",
                        "AttributeType": "S"
                    }
                ],
                "KeySchema": [
                    {
                        "AttributeName": "id",
                        "KeyType": "HASH"
                    }
                ],
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 1,
                    "WriteCapacityUnits": 1
                }
            }
        },
        "PythonLambdaRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "Service": "lambda.amazonaws.com"
                            },
                            "Action": "sts:AssumeRole"
                        }
                    ]
                },
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
                ],
                "Policies": [
                    {
                        "PolicyName": "PythonLambdaAccess",
                        "PolicyDocument": {
                            "Version": "2012-10-17",
                            "Statement": [
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "dynamodb:DescribeTable",
                                        "dynamodb:BatchWriteItem"
                                    ],
                                    "Resource": {
                                        "Fn::Sub": "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${PythonLambdaOutputTable}"
                                    }
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "dynamodb:PutItem"
                                    ],
                                    "Resource": {
                                        "Fn::Sub": "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${PythonLambdaOutputTable}*"
                                    }
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "kinesis:GetRecords",
                                        "kinesis:GetShardIterator",
                                        "kinesis:DescribeStream",
                                        "kinesis:ListStreams"
                                    ],
                                    "Resource": {
                                        "Fn::Sub": "arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${InputStream}"
                                    }
                                }
                            ]
                        }
                    }
                ]
            }
        },
        "PythonLambdaFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Description": "Python consumer",
                "Runtime": "python3.9",
                "MemorySize": 512,
                "Timeout": 90,
                "Role": {
                    "Fn::GetAtt": [
                        "PythonLambdaRole",
                        "Arn"
                    ]
                },
                "Handler": "aws_crypto_examples.kinesis_datakey_caching.consumer.lambda_handler",
                "Code": {
                    "S3Bucket": {
                        "Ref": "SourceCodeBucket"
                    },
                    "S3Key": {
                        "Ref": "PythonLambdaS3Key"
                    },
                    "S3ObjectVersion": {
                        "Ref": "PythonLambdaObjectVersionId"
                    }
                },
                "Environment": {
                    "Variables": {
                        "TABLE_NAME": {
                            "Ref": "PythonLambdaOutputTable"
                        }
                    }
                }
            }
        },
        "PythonLambdaSourceMapping": {
            "Type": "AWS::Lambda::EventSourceMapping",
            "Properties": {
                "BatchSize": 1,
                "Enabled": true,
                "EventSourceArn": {
                    "Fn::Sub": "arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${InputStream}"
                },
                "FunctionName": {
                    "Ref": "PythonLambdaFunction"
                },
                "StartingPosition": "TRIM_HORIZON"
            }
        },
        "JavaLambdaOutputTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [
                    {
                        "AttributeName": "id",
                        "AttributeType": "S"
                    }
                ],
                "KeySchema": [
                    {
                        "AttributeName": "id",
                        "KeyType": "HASH"
                    }
                ],
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 1,
                    "WriteCapacityUnits": 1
                }
            }
        },
        "JavaLambdaRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "Service": "lambda.amazonaws.com"
                            },
                            "Action": "sts:AssumeRole"
                        }
                    ]
                },
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
                ],
                "Policies": [
                    {
                        "PolicyName": "JavaLambdaAccess",
                        "PolicyDocument": {
                            "Version": "2012-10-17",
                            "Statement": [
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "dynamodb:DescribeTable",
                                        "dynamodb:BatchWriteItem"
                                    ],
                                    "Resource": {
                                        "Fn::Sub": "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${JavaLambdaOutputTable}"
                                    }
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "dynamodb:PutItem"
                                    ],
                                    "Resource": {
                                        "Fn::Sub": "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${JavaLambdaOutputTable}*"
                                    }
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "kinesis:GetRecords",
                                        "kinesis:GetShardIterator",
                                        "kinesis:DescribeStream",
                                        "kinesis:ListStreams"
                                    ],
                                    "Resource": {
                                        "Fn::Sub": "arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${InputStream}"
                                    }
                                }
                            ]
                        }
                    }
                ]
            }
        },
        "JavaLambdaFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Description": "Java consumer",
                "Runtime": "java8",
                "MemorySize": 512,
                "Timeout": 90,
                "Role": {
                    "Fn::GetAtt": [
                        "JavaLambdaRole",
                        "Arn"
                    ]
                },
                "Handler": "com.amazonaws.crypto.examples.kinesisdatakeycaching.LambdaDecryptAndWrite::handleRequest",
                "Code": {
                    "S3Bucket": {
                        "Ref": "SourceCodeBucket"
                    },
                    "S3Key": {
                        "Ref": "JavaLambdaS3Key"
                    },
                    "S3ObjectVersion": {
                        "Ref": "JavaLambdaObjectVersionId"
                    }
                },
                "Environment": {
                    "Variables": {
                        "TABLE_NAME": {
                            "Ref": "JavaLambdaOutputTable"
                        },
                        "CMK_ARN": {
                            "Fn::GetAtt": [
                                "RegionKinesisCMK",
                                "Arn"
                            ]
                        }
                    }
                }
            }
        },
        "JavaLambdaSourceMapping": {
            "Type": "AWS::Lambda::EventSourceMapping",
            "Properties": {
                "BatchSize": 1,
                "Enabled": true,
                "EventSourceArn": {
                    "Fn::Sub": "arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${InputStream}"
                },
                "FunctionName": {
                    "Ref": "JavaLambdaFunction"
                },
                "StartingPosition": "TRIM_HORIZON"
            }
        },
        "RegionKinesisCMK": {
            "Type": "AWS::KMS::Key",
            "Properties": {
                "Description": "Used to encrypt data passing through Kinesis Stream in this region",
                "Enabled": true,
                "KeyPolicy": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "AWS": {
                                    "Fn::Sub": "arn:aws:iam::${AWS::AccountId}:root"
                                }
                            },
                            "Action": [
                                "kms:Encrypt",
                                "kms:GenerateDataKey",
                                "kms:CreateAlias",
                                "kms:DeleteAlias",
                                "kms:DescribeKey",
                                "kms:DisableKey",
                                "kms:EnableKey",
                                "kms:PutKeyPolicy",
                                "kms:ScheduleKeyDeletion",
                                "kms:UpdateAlias",
                                "kms:UpdateKeyDescription"
                            ],
                            "Resource": "*"
                        },
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "AWS": [
                                    {
                                        "Fn::GetAtt": [
                                            "PythonLambdaRole",
                                            "Arn"
                                        ]
                                    },
                                    {
                                        "Fn::GetAtt": [
                                            "JavaLambdaRole",
                                            "Arn"
                                        ]
                                    }
                                ]
                            },
                            "Action": "kms:Decrypt",
                            "Resource": "*"
                        }
                    ]
                }
            }
        },
        "RegionKinesisCMKAlias": {
            "Type": "AWS::KMS::Alias",
            "Properties": {
                "AliasName": {
                    "Fn::Sub": "alias/${KeyAliasSuffix}"
                },
                "TargetKeyId": {
                    "Ref": "RegionKinesisCMK"
                }
            }
        }
    }
}
```

------
#### [ YAML ]

```
Parameters:
    SourceCodeBucket:
        Type: String
        Description: S3 bucket containing Lambda source code zip files
    PythonLambdaS3Key:
        Type: String
        Description: S3 key containing Python Lambda source code zip file
    PythonLambdaObjectVersionId:
        Type: String
        Description: S3 version id for S3 key containing Python Lambda source code zip file
    JavaLambdaS3Key:
        Type: String
        Description: S3 key containing Java Lambda source code zip file
    JavaLambdaObjectVersionId:
        Type: String
        Description: S3 version id for S3 key containing Java Lambda source code zip file
    KeyAliasSuffix:
        Type: String
        Description: 'Suffix to use for KMS CMK alias (i.e., alias/<KeyAliasSuffix>)'
    StreamName:
        Type: String
        Description: Name to use for Kinesis Stream
Resources:
    InputStream:
        Type: AWS::Kinesis::Stream
        Properties:
            Name: !Ref StreamName
            ShardCount: 2
    PythonLambdaOutputTable:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                -
                    AttributeName: id
                    AttributeType: S
            KeySchema:
                -
                    AttributeName: id
                    KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
    PythonLambdaRole:
        Type: AWS::IAM::Role
        Properties:
            AssumeRolePolicyDocument:
                Version: "2012-10-17"
                Statement:
                    -
                        Effect: Allow
                        Principal:
                            Service: lambda.amazonaws.com
                        Action: sts:AssumeRole
            ManagedPolicyArns:
                - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
            Policies:
                -
                    PolicyName: PythonLambdaAccess
                    PolicyDocument:
                        Version: "2012-10-17"
                        Statement:
                            -
                                Effect: Allow
                                Action:
                                    - dynamodb:DescribeTable
                                    - dynamodb:BatchWriteItem
                                Resource: !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${PythonLambdaOutputTable}
                            -
                                Effect: Allow
                                Action:
                                    - dynamodb:PutItem
                                Resource: !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${PythonLambdaOutputTable}*
                            -
                                Effect: Allow
                                Action:
                                    - kinesis:GetRecords
                                    - kinesis:GetShardIterator
                                    - kinesis:DescribeStream
                                    - kinesis:ListStreams
                                Resource: !Sub arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${InputStream}
    PythonLambdaFunction:
        Type: AWS::Lambda::Function
        Properties:
            Description: Python consumer
            Runtime: python3.9
            MemorySize: 512
            Timeout: 90
            Role: !GetAtt PythonLambdaRole.Arn
            Handler: aws_crypto_examples.kinesis_datakey_caching.consumer.lambda_handler
            Code:
                S3Bucket: !Ref SourceCodeBucket
                S3Key: !Ref PythonLambdaS3Key
                S3ObjectVersion: !Ref PythonLambdaObjectVersionId
            Environment:
                Variables:
                    TABLE_NAME: !Ref PythonLambdaOutputTable
    PythonLambdaSourceMapping:
        Type: AWS::Lambda::EventSourceMapping
        Properties:
            BatchSize: 1
            Enabled: true
            EventSourceArn: !Sub arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${InputStream}
            FunctionName: !Ref PythonLambdaFunction
            StartingPosition: TRIM_HORIZON
    JavaLambdaOutputTable:
        Type: AWS::DynamoDB::Table
        Properties:
            AttributeDefinitions:
                -
                    AttributeName: id
                    AttributeType: S
            KeySchema:
                -
                    AttributeName: id
                    KeyType: HASH
            ProvisionedThroughput:
                ReadCapacityUnits: 1
                WriteCapacityUnits: 1
    JavaLambdaRole:
        Type: AWS::IAM::Role
        Properties:
            AssumeRolePolicyDocument:
                Version: "2012-10-17"
                Statement:
                    -
                        Effect: Allow
                        Principal:
                            Service: lambda.amazonaws.com
                        Action: sts:AssumeRole
            ManagedPolicyArns:
                - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
            Policies:
                -
                    PolicyName: JavaLambdaAccess
                    PolicyDocument:
                        Version: "2012-10-17"
                        Statement:
                            -
                                Effect: Allow
                                Action:
                                    - dynamodb:DescribeTable
                                    - dynamodb:BatchWriteItem
                                Resource: !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${JavaLambdaOutputTable}
                            -
                                Effect: Allow
                                Action:
                                    - dynamodb:PutItem
                                Resource: !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${JavaLambdaOutputTable}*
                            -
                                Effect: Allow
                                Action:
                                    - kinesis:GetRecords
                                    - kinesis:GetShardIterator
                                    - kinesis:DescribeStream
                                    - kinesis:ListStreams
                                Resource: !Sub arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${InputStream}
    JavaLambdaFunction:
        Type: AWS::Lambda::Function
        Properties:
            Description: Java consumer
            Runtime: java8
            MemorySize: 512
            Timeout: 90
            Role: !GetAtt JavaLambdaRole.Arn
            Handler: com.amazonaws.crypto.examples.kinesisdatakeycaching.LambdaDecryptAndWrite::handleRequest
            Code:
                S3Bucket: !Ref SourceCodeBucket
                S3Key: !Ref JavaLambdaS3Key
                S3ObjectVersion: !Ref JavaLambdaObjectVersionId
            Environment:
                Variables:
                    TABLE_NAME: !Ref JavaLambdaOutputTable
                    CMK_ARN: !GetAtt RegionKinesisCMK.Arn
    JavaLambdaSourceMapping:
        Type: AWS::Lambda::EventSourceMapping
        Properties:
            BatchSize: 1
            Enabled: true
            EventSourceArn: !Sub arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${InputStream}
            FunctionName: !Ref JavaLambdaFunction
            StartingPosition: TRIM_HORIZON
    RegionKinesisCMK:
        Type: AWS::KMS::Key
        Properties:
            Description: Used to encrypt data passing through Kinesis Stream in this region
            Enabled: true
            KeyPolicy:
                Version: "2012-10-17"
                Statement:
                    -
                        Effect: Allow
                        Principal:
                            AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
                        Action:
                            # Data plane actions
                            - kms:Encrypt
                            - kms:GenerateDataKey
                            # Control plane actions
                            - kms:CreateAlias
                            - kms:DeleteAlias
                            - kms:DescribeKey
                            - kms:DisableKey
                            - kms:EnableKey
                            - kms:PutKeyPolicy
                            - kms:ScheduleKeyDeletion
                            - kms:UpdateAlias
                            - kms:UpdateKeyDescription
                        Resource: '*'
                    -
                        Effect: Allow
                        Principal:
                            AWS:
                                - !GetAtt PythonLambdaRole.Arn
                                - !GetAtt JavaLambdaRole.Arn
                        Action: kms:Decrypt
                        Resource: '*'
    RegionKinesisCMKAlias:
        Type: AWS::KMS::Alias
        Properties:
            AliasName: !Sub alias/${KeyAliasSuffix}
            TargetKeyId: !Ref RegionKinesisCMK
```

------