Amazon DynamoDB Construct Library
---
AWS CDK v1 has reached End-of-Support on 2023-06-01. This package is no longer being updated, and users should migrate to AWS CDK v2.
For more information on how to migrate, see the Migrating to AWS CDK v2 guide.
Here is a minimal deployable DynamoDB table definition:
table = dynamodb.Table(self, "Table",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING)
)
Importing existing tables
To import an existing table into your CDK application, use one of the Table.fromTableName, Table.fromTableArn, or Table.fromTableAttributes factory methods. These methods accept the name or ARN of an already existing table:
# user: iam.User
table = dynamodb.Table.from_table_arn(self, "ImportedTable", "arn:aws:dynamodb:us-east-1:111111111:table/my-table")
# now you can just call methods on the table
table.grant_read_write_data(user)
If you intend to use the tableStreamArn (including indirectly, for example by creating an @aws-cdk/aws-lambda-event-source.DynamoEventSource on the imported table), you must use the Table.fromTableAttributes method, and the tableStreamArn property must be populated.
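A minimal sketch of importing by attributes with the stream ARN populated; the ARNs below are placeholder values, not real resources:

```python
# Import with Table.fromTableAttributes so tableStreamArn is available
# on the imported table (both ARNs are placeholders).
imported_table = dynamodb.Table.from_table_attributes(self, "ImportedStreamTable",
    table_arn="arn:aws:dynamodb:us-east-1:111111111:table/my-table",
    table_stream_arn="arn:aws:dynamodb:us-east-1:111111111:table/my-table/stream/2021-01-01T00:00:00.000"
)
```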
Keys
When a table is defined, you must define its schema using the partitionKey (required) and sortKey (optional) properties.
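For example, a table keyed by both a partition key and a sort key might look like this (the attribute names are illustrative):

```python
# Table keyed by a string partition key and a numeric sort key;
# "customerId" and "createdAt" are illustrative attribute names.
table = dynamodb.Table(self, "OrdersTable",
    partition_key=dynamodb.Attribute(name="customerId", type=dynamodb.AttributeType.STRING),
    sort_key=dynamodb.Attribute(name="createdAt", type=dynamodb.AttributeType.NUMBER)
)
```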
Billing Mode
DynamoDB supports two billing modes:
PROVISIONED - the default mode where the table and global secondary indexes have configured read and write capacity.
PAY_PER_REQUEST - on-demand pricing and scaling. You only pay for what you use, and there is no read or write capacity to configure for the table or its global secondary indexes.
table = dynamodb.Table(self, "Table",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST
)
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
Table Class
DynamoDB supports two table classes:
STANDARD - the default table class, recommended for the vast majority of workloads.
STANDARD_INFREQUENT_ACCESS - optimized for tables where storage is the dominant cost.
table = dynamodb.Table(self, "Table",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    table_class=dynamodb.TableClass.STANDARD_INFREQUENT_ACCESS
)
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.TableClasses.html
Configure AutoScaling for your table
You can have DynamoDB automatically raise and lower the read and write capacities of your table by setting up autoscaling. You can use this either to keep your tables at a desired utilization level, or to scale up and down at pre-configured times of the day:
Auto-scaling is only relevant for tables with PROVISIONED billing mode.
# table: dynamodb.Table
import aws_cdk.aws_applicationautoscaling as appscaling

read_scaling = table.auto_scale_read_capacity(min_capacity=1, max_capacity=50)
read_scaling.scale_on_utilization(
    target_utilization_percent=50
)
read_scaling.scale_on_schedule("ScaleUpInTheMorning",
    schedule=appscaling.Schedule.cron(hour="8", minute="0"),
    min_capacity=20
)
read_scaling.scale_on_schedule("ScaleDownAtNight",
    schedule=appscaling.Schedule.cron(hour="20", minute="0"),
    max_capacity=20
)
Further reading:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
https://aws.amazon.com/blogs/database/how-to-use-aws-cloudformation-to-configure-auto-scaling-for-amazon-dynamodb-tables-and-indexes/
Amazon DynamoDB Global Tables
You can create DynamoDB Global Tables by setting the replicationRegions property on a Table:
global_table = dynamodb.Table(self, "Table",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    replication_regions=["us-east-1", "us-east-2", "us-west-2"]
)
When doing so, a CloudFormation Custom Resource will be added to the stack in order to create the replica tables in the selected regions.
The default billing mode for Global Tables is PAY_PER_REQUEST. If you want to use PROVISIONED, you have to make sure write auto-scaling is enabled for that Table:
global_table = dynamodb.Table(self, "Table",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    replication_regions=["us-east-1", "us-east-2", "us-west-2"],
    billing_mode=dynamodb.BillingMode.PROVISIONED
)

global_table.auto_scale_write_capacity(
    min_capacity=1,
    max_capacity=10
).scale_on_utilization(target_utilization_percent=75)
When adding a replica region for a large table, you might want to increase the timeout for the replication operation:
from aws_cdk.core import Duration

global_table = dynamodb.Table(self, "Table",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    replication_regions=["us-east-1", "us-east-2", "us-west-2"],
    replication_timeout=Duration.hours(2)
)
Encryption
All user data stored in Amazon DynamoDB is fully encrypted at rest. When creating a new table, you can choose which of the following customer master keys (CMKs) will be used to encrypt your table:
AWS owned CMK - By default, all tables are encrypted under an AWS owned customer master key (CMK) in the DynamoDB service account (no additional charges apply).
AWS managed CMK - AWS KMS keys (one per region) are created in your account, managed, and used on your behalf by AWS DynamoDB (AWS KMS charges apply).
Customer managed CMK - You have full control over the KMS key used to encrypt the DynamoDB Table (AWS KMS charges apply).
Creating a Table encrypted with a customer managed CMK:
table = dynamodb.Table(self, "MyTable",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    encryption=dynamodb.TableEncryption.CUSTOMER_MANAGED
)
# You can access the CMK that was added to the stack on your behalf by the Table construct via:
table_encryption_key = table.encryption_key
You can also supply your own key:
import aws_cdk.aws_kms as kms
encryption_key = kms.Key(self, "Key",
    enable_key_rotation=True
)
table = dynamodb.Table(self, "MyTable",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    encryption=dynamodb.TableEncryption.CUSTOMER_MANAGED,
    encryption_key=encryption_key
)
In order to use the AWS managed CMK instead, change the code to:
table = dynamodb.Table(self, "MyTable",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    encryption=dynamodb.TableEncryption.AWS_MANAGED
)
Get schema of table or secondary indexes
To get the partition key and sort key of the table or indexes you have configured:
# table: dynamodb.Table
schema = table.schema()
partition_key = schema.partition_key
sort_key = schema.sort_key
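The schema() method also accepts an index name to inspect a secondary index; a sketch, assuming a global secondary index named "MyGsi" (an illustrative name) has been added to the table:

```python
# table: dynamodb.Table
# Add a global secondary index, then read its key schema back.
table.add_global_secondary_index(
    index_name="MyGsi",
    partition_key=dynamodb.Attribute(name="gsiKey", type=dynamodb.AttributeType.STRING)
)
gsi_schema = table.schema(index_name="MyGsi")
gsi_partition_key = gsi_schema.partition_key
```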
Kinesis Stream
A Kinesis Data Stream can be configured on the DynamoDB table to capture item-level changes.
import aws_cdk.aws_kinesis as kinesis
stream = kinesis.Stream(self, "Stream")
table = dynamodb.Table(self, "Table",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    kinesis_stream=stream
)