Interface for accessing DynamoDB
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.
Namespace: Amazon.DynamoDBv2
Assembly: AWSSDK.DynamoDBv2.dll
Version: 3.x.y.z
public interface IAmazonDynamoDB : IAmazonService, IDisposable
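The concrete AmazonDynamoDBClient class implements this interface. The following minimal sketch shows one way to obtain a client; the Region choice is an assumption, and the SDK can also resolve region and credentials from environment or profile configuration. Later sketches on this page reuse this client variable and these namespaces, assume they run inside an async method, and use hypothetical table, attribute, and value names throughout.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

// AmazonDynamoDBClient is the concrete implementation of IAmazonDynamoDB.
IAmazonDynamoDB client = new AmazonDynamoDBClient(RegionEndpoint.USWest2);
```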
The IAmazonDynamoDB type exposes the following members
Name | Type | Description
---|---|---
Paginators | Amazon.DynamoDBv2.Model.IDynamoDBv2PaginatorFactory | Paginators for the service

Name | Description
---|---
BatchExecuteStatement(BatchExecuteStatementRequest) |
This operation allows you to perform batch reads or writes on data stored in DynamoDB,
using PartiQL. Each read statement in a BatchExecuteStatement must specify an equality
condition on all key attributes. This enforces that each SELECT statement in a batch
returns at most a single item.
The entire batch must consist of either read statements or write statements; you cannot
mix both in one batch.
An HTTP 200 response does not mean that all statements in the BatchExecuteStatement
succeeded. Error details for individual statements can be found under the Error
field of the BatchStatementResponse for each statement. |
|
BatchExecuteStatementAsync(BatchExecuteStatementRequest, CancellationToken) |
Asynchronous version of BatchExecuteStatement; the same description applies. |
|
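As a sketch of the batch PartiQL flow described above, the following read-only batch issues a fully keyed SELECT and inspects per-statement errors (the Music table and its key attributes are hypothetical; client and usings as in the earlier sketch):

```csharp
var batchResponse = await client.BatchExecuteStatementAsync(new BatchExecuteStatementRequest
{
    Statements = new List<BatchStatementRequest>
    {
        new BatchStatementRequest
        {
            // Each read statement must specify the full primary key.
            Statement = "SELECT * FROM Music WHERE Artist = ? AND SongTitle = ?",
            Parameters = new List<AttributeValue>
            {
                new AttributeValue { S = "No One You Know" },
                new AttributeValue { S = "Call Me Today" },
            },
        },
    },
});

// An HTTP 200 does not imply per-statement success; check each statement's Error field.
foreach (var statement in batchResponse.Responses)
{
    if (statement.Error != null)
    {
        Console.WriteLine($"Statement failed: {statement.Error.Code}");
    }
}
```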
BatchGetItem(Dictionary<String, KeysAndAttributes>, ReturnConsumedCapacity) |
The BatchGetItem operation returns the attributes of one or more items from one or
more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as
100 items. BatchGetItem returns a partial result if the response size limit is exceeded,
the table's provisioned throughput is exceeded, or an internal processing failure
occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys.
You can use this value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem returns a ValidationException with
the message "Too many items requested for the BatchGetItem call."
For example, if you ask to retrieve 100 items, but each individual item is 300 KB
in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also
returns an appropriate UnprocessedKeys value so you can get the next page of results.
If desired, your application can include its own logic to assemble the pages of results
into one dataset.
If none of the items can be processed due to insufficient provisioned throughput
on all of the tables in the request, then BatchGetItem returns a ProvisionedThroughputExceededException.
If at least one of the items is successfully processed, then BatchGetItem completes
successfully, while returning the keys of the unread items in UnprocessedKeys.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed. For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem performs eventually consistent reads on every table in the
request. If you want strongly consistent reads instead, you can set ConsistentRead
to true for any or all tables.
In order to minimize response latency, BatchGetItem may retrieve items in parallel.
When designing your application, keep in mind that DynamoDB does not return items
in any particular order. To help parse the response by item, include the primary key
values for the items in your request in the ProjectionExpression parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide. |
|
BatchGetItem(Dictionary<String, KeysAndAttributes>) |
Overload of BatchGetItem; the description above applies. |
|
BatchGetItem(BatchGetItemRequest) |
Overload of BatchGetItem; the description above applies. |
|
BatchGetItemAsync(Dictionary<String, KeysAndAttributes>, ReturnConsumedCapacity, CancellationToken) |
Asynchronous version of BatchGetItem; the description above applies. |
|
BatchGetItemAsync(Dictionary<String, KeysAndAttributes>, CancellationToken) |
Asynchronous version of BatchGetItem; the description above applies. |
|
BatchGetItemAsync(BatchGetItemRequest, CancellationToken) |
Asynchronous version of BatchGetItem; the description above applies. |
|
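The UnprocessedKeys retry guidance above is the part most often missed, so here is a sketch of the recommended loop with exponential backoff (hypothetical Forum table; client and usings as in the earlier sketch):

```csharp
var requestItems = new Dictionary<string, KeysAndAttributes>
{
    ["Forum"] = new KeysAndAttributes
    {
        Keys = new List<Dictionary<string, AttributeValue>>
        {
            new Dictionary<string, AttributeValue> { ["Name"] = new AttributeValue { S = "Amazon DynamoDB" } },
            new Dictionary<string, AttributeValue> { ["Name"] = new AttributeValue { S = "Amazon S3" } },
        },
    },
};

var items = new List<Dictionary<string, AttributeValue>>();
for (int attempt = 0; requestItems.Count > 0 && attempt < 5; attempt++)
{
    if (attempt > 0)
    {
        // Exponential backoff before retrying unprocessed keys.
        await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
    }
    var response = await client.BatchGetItemAsync(new BatchGetItemRequest { RequestItems = requestItems });
    foreach (var tableItems in response.Responses.Values)
    {
        items.AddRange(tableItems);
    }
    requestItems = response.UnprocessedKeys; // empty once everything has been read
}
```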
BatchWriteItem(Dictionary<String, List<WriteRequest>>) |
The BatchWriteItem operation puts or deletes multiple items in one or more tables.
A single call to BatchWriteItem can transmit up to 16 MB of data over the network,
consisting of up to 25 item put or delete operations. While individual items can be
up to 400 KB once stored, it's important to note that an item's representation might
be greater than 400 KB while being sent in DynamoDB's JSON format for the API call.
BatchWriteItem cannot update items; if you perform a BatchWriteItem operation on an
existing item, that item's values will be overwritten by the operation. To update
items, we recommend you use the UpdateItem action.
The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic;
however, BatchWriteItem as a whole is not. If any requested operations fail because
the table's provisioned throughput is exceeded or an internal processing failure occurs,
the failed operations are returned in the UnprocessedItems response parameter. You
can investigate and optionally resend the requests. Typically, you would call BatchWriteItem
in a loop; each iteration would check for unprocessed items and submit a new BatchWriteItem
request with those unprocessed items until all items have been processed.
For tables and indexes with provisioned capacity, if none of the items can be processed
due to insufficient provisioned throughput on all of the tables in the request, then
BatchWriteItem returns a ProvisionedThroughputExceededException.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed. For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
With BatchWriteItem, you can efficiently write or delete large amounts of data, such
as from Amazon EMR, or copy data from another database into DynamoDB. In order to
improve performance with these large-scale operations, BatchWriteItem does not behave
in the same way as individual PutItem and DeleteItem calls would. For example, you
cannot specify conditions on individual put and delete requests, and BatchWriteItem
does not return deleted items in the response.
If you use a programming language that supports concurrency, you can use threads to
write items in parallel. Your application must include the necessary logic to manage
the threads. With languages that don't support threading, you must update or delete
the specified items one at a time. In both situations, BatchWriteItem performs the
specified put and delete operations in parallel, giving you the power of the thread
pool approach without having to introduce complexity into your application.
Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
If one or more of the following is true, DynamoDB rejects the entire batch write operation:
- One or more tables specified in the BatchWriteItem request does not exist.
- Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
- You try to perform multiple operations on the same item in the same BatchWriteItem request (for example, you cannot put and delete the same item in the same request).
- Your request contains at least two items with identical hash and range keys (which essentially is two put operations).
- There are more than 25 requests in the batch.
- Any individual item in a batch exceeds 400 KB.
- The total request size exceeds 16 MB. |
|
BatchWriteItem(BatchWriteItemRequest) |
Overload of BatchWriteItem; the description above applies. |
|
BatchWriteItemAsync(Dictionary<String, List<WriteRequest>>, CancellationToken) |
Asynchronous version of BatchWriteItem; the description above applies. |
|
BatchWriteItemAsync(BatchWriteItemRequest, CancellationToken) |
Asynchronous version of BatchWriteItem; the description above applies. |
|
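A brief sketch of a mixed put/delete batch; anything DynamoDB could not process comes back in UnprocessedItems for the caller to retry with backoff (hypothetical Thread table; client and usings as in the earlier sketch):

```csharp
var writeResponse = await client.BatchWriteItemAsync(new BatchWriteItemRequest
{
    RequestItems = new Dictionary<string, List<WriteRequest>>
    {
        ["Thread"] = new List<WriteRequest>
        {
            new WriteRequest(new PutRequest(new Dictionary<string, AttributeValue>
            {
                ["ForumName"] = new AttributeValue { S = "Amazon DynamoDB" },
                ["Subject"] = new AttributeValue { S = "Batch writes" },
            })),
            new WriteRequest(new DeleteRequest(new Dictionary<string, AttributeValue>
            {
                ["ForumName"] = new AttributeValue { S = "Amazon DynamoDB" },
                ["Subject"] = new AttributeValue { S = "Obsolete thread" },
            })),
        },
    },
});

// Non-empty when some operations were throttled or failed internally; retry these.
var unprocessed = writeResponse.UnprocessedItems;
```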
CreateBackup(CreateBackupRequest) |
Creates a backup for an existing table. Each time you create an on-demand backup, the entire table data is backed up. There is no limit to the number of on-demand backups that can be taken. When you create an on-demand backup, a time marker of the request is cataloged, and the backup is created asynchronously, by applying all changes until the time of the request to the last full table snapshot. Backup requests are processed instantaneously and become available for restore within minutes.
You can call CreateBackup at a maximum rate of 50 times per second.
All backups in DynamoDB work without consuming any provisioned throughput on the table.
If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to contain all data committed to the table up to 14:24:00, and data committed after 14:26:00 will not be. The backup might contain data modifications made between 14:24:00 and 14:26:00. On-demand backup does not support causal consistency.
Along with data, the following are also included on the backups:
- Global secondary indexes (GSIs)
- Local secondary indexes (LSIs)
- Streams
- Provisioned read and write capacity |
|
CreateBackupAsync(CreateBackupRequest, CancellationToken) |
Asynchronous version of CreateBackup; the same description applies. |
|
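A one-call sketch (hypothetical table and backup names; client and usings as in the earlier sketch):

```csharp
var backupResponse = await client.CreateBackupAsync(new CreateBackupRequest
{
    TableName = "Thread",
    BackupName = "Thread-Backup-1",
});
Console.WriteLine(backupResponse.BackupDetails.BackupArn);
```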
CreateGlobalTable(CreateGlobalTableRequest) |
Creates a global table from an existing table. A global table creates a replication
relationship between two or more DynamoDB tables with the same table name in the provided
Regions.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should
be avoided for new global tables. Customers should use Global
Tables version 2019.11.21 (Current) when possible, because it provides greater
flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining
the global table version you are using. To update existing global tables from
version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading
global tables.
If you want to add a new replica table to a global table, each of the following conditions must be true:
- The table must have the same primary key as all of the other replicas.
- The table must have the same name as all of the other replicas.
- The table must have DynamoDB Streams enabled, with the stream containing both the new and the old images of the item.
- None of the replica tables in the global table can contain any data.
If global secondary indexes are specified, then the following conditions must also be met:
- The global secondary indexes must have the same name.
- The global secondary indexes must have the same hash key and sort key (if present).
If local secondary indexes are specified, then the following conditions must also be met:
- The local secondary indexes must have the same name.
- The local secondary indexes must have the same hash key and sort key (if present).
Write capacity settings should be set consistently across your replica tables and secondary indexes. DynamoDB strongly recommends enabling auto scaling to manage the write capacity settings for all of your global tables replicas and indexes. If you prefer to manage write capacity settings manually, you should provision equal replicated write capacity units to your replica tables. You should also provision equal replicated write capacity units to matching secondary indexes across your global table. |
|
CreateGlobalTableAsync(CreateGlobalTableRequest, CancellationToken) |
Asynchronous version of CreateGlobalTable; the same description applies. |
|
CreateTable(string, List<KeySchemaElement>, List<AttributeDefinition>, ProvisionedThroughput) |
The CreateTable operation adds a new table to your account. In an Amazon Web Services
account, table names must be unique within each Region. That is, you can have two
tables with the same name if you create the tables in different Regions.
CreateTable is an asynchronous operation. Upon receiving a CreateTable request, DynamoDB
immediately returns a response with a TableStatus of CREATING. After the table is
created, DynamoDB sets the TableStatus to ACTIVE. You can perform read and write
operations only on an ACTIVE table.
You can optionally define secondary indexes on the new table, as part of the CreateTable
operation. If you want to create multiple tables with secondary indexes on them, you
must create the tables sequentially. Only one table with secondary indexes can be
in the CREATING state at any given time.
You can use the DescribeTable action to check the table status. |
|
CreateTable(CreateTableRequest) |
Overload of CreateTable; the description above applies. |
|
CreateTableAsync(string, List<KeySchemaElement>, List<AttributeDefinition>, ProvisionedThroughput, CancellationToken) |
Asynchronous version of CreateTable; the description above applies. |
|
CreateTableAsync(CreateTableRequest, CancellationToken) |
Asynchronous version of CreateTable; the description above applies. |
|
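A sketch of creating a table with a composite primary key and provisioned throughput (hypothetical Music table; capacity values are illustrative; client and usings as in the earlier sketch):

```csharp
var createResponse = await client.CreateTableAsync(new CreateTableRequest
{
    TableName = "Music",
    AttributeDefinitions = new List<AttributeDefinition>
    {
        new AttributeDefinition("Artist", ScalarAttributeType.S),
        new AttributeDefinition("SongTitle", ScalarAttributeType.S),
    },
    KeySchema = new List<KeySchemaElement>
    {
        new KeySchemaElement("Artist", KeyType.HASH),     // partition key
        new KeySchemaElement("SongTitle", KeyType.RANGE), // sort key
    },
    ProvisionedThroughput = new ProvisionedThroughput(5, 5),
});
Console.WriteLine(createResponse.TableDescription.TableStatus); // CREATING at first
```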
DeleteBackup(DeleteBackupRequest) |
Deletes an existing backup of a table.
You can call DeleteBackup at a maximum rate of 10 times per second. |
|
DeleteBackupAsync(DeleteBackupRequest, CancellationToken) |
Asynchronous version of DeleteBackup; the same description applies. |
|
DeleteItem(string, Dictionary<String, AttributeValue>) |
Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.
In addition to deleting an item, you can also return the item's attribute values in
the same operation, using the ReturnValues parameter.
Unless you specify conditions, DeleteItem is an idempotent operation; running it
multiple times on the same item or attribute does not result in an error response.
Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted. |
|
DeleteItem(string, Dictionary<String, AttributeValue>, ReturnValue) |
Overload of DeleteItem; the description above applies. |
|
DeleteItem(DeleteItemRequest) |
Overload of DeleteItem; the description above applies. |
|
DeleteItemAsync(string, Dictionary<String, AttributeValue>, CancellationToken) |
Asynchronous version of DeleteItem; the description above applies. |
|
DeleteItemAsync(string, Dictionary<String, AttributeValue>, ReturnValue, CancellationToken) |
Asynchronous version of DeleteItem; the description above applies. |
|
DeleteItemAsync(DeleteItemRequest, CancellationToken) |
Asynchronous version of DeleteItem; the description above applies. |
|
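A sketch of a conditional delete; the Replies attribute is hypothetical, and the condition prevents deleting a thread that already has replies (client and usings as in the earlier sketch):

```csharp
try
{
    var deleteResponse = await client.DeleteItemAsync(new DeleteItemRequest
    {
        TableName = "Thread",
        Key = new Dictionary<string, AttributeValue>
        {
            ["ForumName"] = new AttributeValue { S = "Amazon DynamoDB" },
            ["Subject"] = new AttributeValue { S = "Obsolete thread" },
        },
        ConditionExpression = "attribute_not_exists(Replies)",
        ReturnValues = ReturnValue.ALL_OLD, // return the deleted item's attributes
    });
}
catch (ConditionalCheckFailedException)
{
    // The condition was not met; the item was left untouched.
}
```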
DeleteResourcePolicy(DeleteResourcePolicyRequest) |
Deletes the resource-based policy attached to the resource, which can be a table or stream.
To make sure that you don't inadvertently lock yourself out of your own resources,
the root principal in your Amazon Web Services account can perform DeleteResourcePolicy
requests, even if your resource-based policy explicitly denies the root principal's access.
DeleteResourcePolicy is an idempotent operation; running it multiple times on the same resource doesn't result in an error response, unless you specify an ExpectedRevisionId that doesn't match the current policy's RevisionId. |
|
DeleteResourcePolicyAsync(DeleteResourcePolicyRequest, CancellationToken) |
Asynchronous version of DeleteResourcePolicy; the same description applies. |
|
DeleteTable(string) |
The DeleteTable operation deletes a table and all of its items. After a DeleteTable
request, the specified table is in the DELETING state until DynamoDB completes the
deletion. If the table is in the ACTIVE state, you can delete it. If a table is in
CREATING or UPDATING states, then DynamoDB returns a ResourceInUseException. If the
specified table does not exist, DynamoDB returns a ResourceNotFoundException. If the
table is already in the DELETING state, no error is returned.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
DynamoDB might continue to accept data read and write operations, such as GetItem
and PutItem, on a table in the DELETING state until the table deletion is complete.
When you delete a table, any indexes on that table are also deleted.
If you have DynamoDB Streams enabled on the table, then the corresponding stream on
that table goes into the DISABLED state, and the stream is automatically deleted after
24 hours.
Use the DescribeTable action to check the status of the table. |
|
DeleteTable(DeleteTableRequest) |
Overload of DeleteTable; the description above applies. |
|
DeleteTableAsync(string, CancellationToken) |
Asynchronous version of DeleteTable; the description above applies. |
|
DeleteTableAsync(DeleteTableRequest, CancellationToken) |
Asynchronous version of DeleteTable; the description above applies. |
|
DescribeBackup(DescribeBackupRequest) |
Describes an existing backup of a table.
You can call DescribeBackup at a maximum rate of 10 times per second. |
|
DescribeBackupAsync(DescribeBackupRequest, CancellationToken) |
Asynchronous version of DescribeBackup; the same description applies. |
|
DescribeContinuousBackups(DescribeContinuousBackupsRequest) |
Checks the status of continuous backups and point in time recovery on the specified
table. Continuous backups are ENABLED on all tables at table creation. If point in
time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
After continuous backups and point in time recovery are enabled, you can restore
to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
LatestRestorableDateTime is typically 5 minutes before the current time.
You can call DescribeContinuousBackups at a maximum rate of 10 times per second. |
|
DescribeContinuousBackupsAsync(DescribeContinuousBackupsRequest, CancellationToken) |
Asynchronous version of DescribeContinuousBackups; the same description applies. |
|
DescribeContributorInsights(DescribeContributorInsightsRequest) |
Returns information about contributor insights for a given table or global secondary index. |
|
DescribeContributorInsightsAsync(DescribeContributorInsightsRequest, CancellationToken) |
Returns information about contributor insights for a given table or global secondary index. |
|
DescribeEndpoints(DescribeEndpointsRequest) |
Returns the regional endpoint information. For more information on policy permissions, please see Internetwork traffic privacy. |
|
DescribeEndpointsAsync(DescribeEndpointsRequest, CancellationToken) |
Returns the regional endpoint information. For more information on policy permissions, please see Internetwork traffic privacy. |
|
DescribeExport(DescribeExportRequest) |
Describes an existing table export. |
|
DescribeExportAsync(DescribeExportRequest, CancellationToken) |
Describes an existing table export. |
|
DescribeGlobalTable(DescribeGlobalTableRequest) |
Returns information about the specified global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should
be avoided for new global tables. Customers should use Global
Tables version 2019.11.21 (Current) when possible, because it provides greater
flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining
the global table version you are using. To update existing global tables from
version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading
global tables.
|
|
DescribeGlobalTableAsync(DescribeGlobalTableRequest, CancellationToken) |
Asynchronous version of DescribeGlobalTable; the same description applies. |
|
DescribeGlobalTableSettings(DescribeGlobalTableSettingsRequest) |
Describes Region-specific settings for a global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should
be avoided for new global tables. Customers should use Global
Tables version 2019.11.21 (Current) when possible, because it provides greater
flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining
the global table version you are using. To update existing global tables from
version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading
global tables.
|
|
DescribeGlobalTableSettingsAsync(DescribeGlobalTableSettingsRequest, CancellationToken) |
Asynchronous version of DescribeGlobalTableSettings; the same description applies. |
|
DescribeImport(DescribeImportRequest) |
Represents the properties of the import. |
|
DescribeImportAsync(DescribeImportRequest, CancellationToken) |
Represents the properties of the import. |
|
DescribeKinesisStreamingDestination(DescribeKinesisStreamingDestinationRequest) |
Returns information about the status of Kinesis streaming. |
|
DescribeKinesisStreamingDestinationAsync(DescribeKinesisStreamingDestinationRequest, CancellationToken) |
Returns information about the status of Kinesis streaming. |
|
DescribeLimits(DescribeLimitsRequest) |
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there. When you establish an Amazon Web Services account, the account has initial quotas on the maximum read capacity units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, there are per-table quotas that apply when you create a table there. For more information, see the Service, Account, and Table Quotas page in the Amazon DynamoDB Developer Guide.
Although you can increase these quotas by filing a case at Amazon
Web Services Support Center, obtaining the increase is not instantaneous. The DescribeLimits
action lets you write code to compare the capacity you are currently using to those
quotas imposed by your account so that you have enough time to apply for an increase
before you hit a quota.
For example, you could use one of the Amazon Web Services SDKs to do the following:
- Call DescribeLimits for a particular Region to obtain your current account quotas on provisioned capacity there.
- Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and one to hold the aggregate write capacity units. Zero them both.
- Call ListTables to obtain a list of all your DynamoDB tables.
- For each table name listed by ListTables, call DescribeTable with the table name, and add the read and write capacity units provisioned for the table itself, and for any global secondary indexes on it, to your variables.
- Report the total current provisioned capacity levels in the Region in each variable.
This will let you see whether you are getting close to your account-level quotas. The per-table quotas apply only when you are creating a new table. They restrict the sum of the provisioned capacity of the new table itself and all its global secondary indexes. For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly, but the only quota that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot exceed either of the per-account quotas.
The DescribeLimits Request element has no content. |
|
DescribeLimitsAsync(DescribeLimitsRequest, CancellationToken) |
Asynchronous version of DescribeLimits; the same description applies. |
|
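The first step of the comparison procedure above is a single call; a sketch of reading the four quota values (client and usings as in the earlier sketch):

```csharp
var limits = await client.DescribeLimitsAsync(new DescribeLimitsRequest());
Console.WriteLine($"Account max RCUs: {limits.AccountMaxReadCapacityUnits}");
Console.WriteLine($"Account max WCUs: {limits.AccountMaxWriteCapacityUnits}");
Console.WriteLine($"Per-table max RCUs: {limits.TableMaxReadCapacityUnits}");
Console.WriteLine($"Per-table max WCUs: {limits.TableMaxWriteCapacityUnits}");
```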
DescribeTable(string) |
Returns information about the table, including the current status of the table, when
it was created, the primary key schema, and any indexes on the table.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
If you issue a DescribeTable request immediately after a CreateTable request, DynamoDB
might return a ResourceNotFoundException. This is because DescribeTable uses an eventually
consistent query, and the metadata for your table might not be available at that moment.
Wait for a few seconds, and then try the DescribeTable request again. |
|
DescribeTable(DescribeTableRequest) |
Overload of DescribeTable; the description above applies. |
|
DescribeTableAsync(string, CancellationToken) |
Asynchronous version of DescribeTable; the description above applies. |
|
DescribeTableAsync(DescribeTableRequest, CancellationToken) |
Asynchronous version of DescribeTable; the description above applies. |
|
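Because CreateTable and DeleteTable are asynchronous, a common pattern is to poll DescribeTable until the table reaches the desired state; a sketch (hypothetical Music table; client and usings as in the earlier sketch):

```csharp
TableDescription table;
do
{
    await Task.Delay(TimeSpan.FromSeconds(2)); // simple fixed-interval poll
    table = (await client.DescribeTableAsync("Music")).Table;
} while (table.TableStatus.Value != TableStatus.ACTIVE.Value);
```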
DescribeTableReplicaAutoScaling(DescribeTableReplicaAutoScalingRequest) |
Describes auto scaling settings across replicas of the global table at once.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
|
|
DescribeTableReplicaAutoScalingAsync(DescribeTableReplicaAutoScalingRequest, CancellationToken) |
Describes auto scaling settings across replicas of the global table at once.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
|
|
DescribeTimeToLive(string) |
Gives a description of the Time to Live (TTL) status on the specified table. |
|
DescribeTimeToLive(DescribeTimeToLiveRequest) |
Gives a description of the Time to Live (TTL) status on the specified table. |
|
DescribeTimeToLiveAsync(string, CancellationToken) |
Gives a description of the Time to Live (TTL) status on the specified table. |
|
DescribeTimeToLiveAsync(DescribeTimeToLiveRequest, CancellationToken) |
Gives a description of the Time to Live (TTL) status on the specified table. |
|
DetermineServiceOperationEndpoint(AmazonWebServiceRequest) |
Returns the endpoint that will be used for a particular request. |
|
DisableKinesisStreamingDestination(DisableKinesisStreamingDestinationRequest) |
Stops replication from the DynamoDB table to the Kinesis data stream. This is done without deleting either of the resources. |
|
DisableKinesisStreamingDestinationAsync(DisableKinesisStreamingDestinationRequest, CancellationToken) |
Stops replication from the DynamoDB table to the Kinesis data stream. This is done without deleting either of the resources. |
|
EnableKinesisStreamingDestination(EnableKinesisStreamingDestinationRequest) |
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable workflow. If this operation doesn't return results immediately, use DescribeKinesisStreamingDestination to check if streaming to the Kinesis data stream is ACTIVE. |
|
EnableKinesisStreamingDestinationAsync(EnableKinesisStreamingDestinationRequest, CancellationToken) |
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable workflow. If this operation doesn't return results immediately, use DescribeKinesisStreamingDestination to check if streaming to the Kinesis data stream is ACTIVE. |
|
ExecuteStatement(ExecuteStatementRequest) |
This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
For PartiQL reads (SELECT statement), if the total number of processed items exceeds
the maximum dataset size limit of 1 MB, the read stops and results are returned to
the user as a LastEvaluatedKey value to continue the read in a subsequent operation.
If the filter criteria in a WHERE clause does not match any data, the read will return
an empty result set.
A single SELECT statement response can return up to the maximum size of data (1 MB). If LastEvaluatedKey is present in the response, you need to paginate the result set. If NextToken is present, you need to paginate the result set and include NextToken. |
|
ExecuteStatementAsync(ExecuteStatementRequest, CancellationToken) |
Asynchronous version of ExecuteStatement; the same description applies. |
|
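A sketch of a paginated PartiQL read via NextToken (hypothetical Music table; client and usings as in the earlier sketch):

```csharp
var statementRequest = new ExecuteStatementRequest
{
    Statement = "SELECT * FROM Music WHERE Artist = ?",
    Parameters = new List<AttributeValue> { new AttributeValue { S = "No One You Know" } },
};
do
{
    var page = await client.ExecuteStatementAsync(statementRequest);
    foreach (var item in page.Items)
    {
        Console.WriteLine(item["SongTitle"].S);
    }
    statementRequest.NextToken = page.NextToken; // null once the result set is exhausted
} while (statementRequest.NextToken != null);
```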
ExecuteTransaction(ExecuteTransactionRequest) |
This operation allows you to perform transactional reads or writes on data stored
in DynamoDB, using PartiQL.
The entire transaction must consist of either read statements or write statements;
you cannot mix both in one transaction. The EXISTS function is an exception and can
be used to check the condition of specific attributes of the item in a similar manner
to ConditionCheck in the TransactWriteItems API. |
|
ExecuteTransactionAsync(ExecuteTransactionRequest, CancellationToken) |
Asynchronous version of ExecuteTransaction; the same description applies. |
|
ExportTableToPointInTime(ExportTableToPointInTimeRequest) |
Exports table data to an S3 bucket. The table must have point in time recovery enabled, and you can export data from any time within the point in time recovery window. |
|
ExportTableToPointInTimeAsync(ExportTableToPointInTimeRequest, CancellationToken) |
Exports table data to an S3 bucket. The table must have point in time recovery enabled, and you can export data from any time within the point in time recovery window. |
|
GetItem(string, Dictionary<String, AttributeValue>) |
The GetItem operation returns a set of attributes for the item with the given primary
key. If there is no matching item, GetItem does not return any data and there will
be no Item element in the response.
GetItem provides an eventually consistent read by default. If your application requires
a strongly consistent read, set ConsistentRead to true. Although a strongly consistent
read might take more time than an eventually consistent read, it always returns the
last updated value. |
|
GetItem(string, Dictionary<String, AttributeValue>, bool) |
Overload of GetItem; the description above applies. |
|
GetItem(GetItemRequest) |
Overload of GetItem; the description above applies. |
|
GetItemAsync(string, Dictionary<String, AttributeValue>, CancellationToken) |
Asynchronous version of GetItem; the description above applies. |
|
GetItemAsync(string, Dictionary<String, AttributeValue>, bool, CancellationToken) |
Asynchronous version of GetItem; the description above applies. |
|
GetItemAsync(GetItemRequest, CancellationToken) |
Asynchronous version of GetItem; the description above applies. |
|
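A sketch of a strongly consistent single-item read (hypothetical Music table; client and usings as in the earlier sketch):

```csharp
var getResponse = await client.GetItemAsync(new GetItemRequest
{
    TableName = "Music",
    Key = new Dictionary<string, AttributeValue>
    {
        ["Artist"] = new AttributeValue { S = "No One You Know" },
        ["SongTitle"] = new AttributeValue { S = "Call Me Today" },
    },
    ConsistentRead = true, // opt out of the default eventually consistent read
});
if (getResponse.Item == null || getResponse.Item.Count == 0)
{
    Console.WriteLine("No matching item.");
}
```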
GetResourcePolicy(GetResourcePolicyRequest) |
Returns the resource-based policy document attached to the resource, which can be a table or stream, in JSON format.
Because GetResourcePolicy uses an eventually consistent query, the metadata for your
policy or table might not be available at that moment. Wait for a few seconds, and
then retry the GetResourcePolicy request.
After a GetResourcePolicy request returns a policy created using the PutResourcePolicy request, the policy will be applied in the authorization of requests to the resource. |
|
GetResourcePolicyAsync(GetResourcePolicyRequest, CancellationToken) |
Asynchronous version of GetResourcePolicy; the same description applies. |
|
ImportTable(ImportTableRequest) |
Imports table data from an S3 bucket. |
|
ImportTableAsync(ImportTableRequest, CancellationToken) |
Imports table data from an S3 bucket. |
|
ListBackups(ListBackupsRequest) |
List DynamoDB backups that are associated with an Amazon Web Services account and
weren't made with Amazon Web Services Backup. To list these backups for a given table,
specify TableName.
In the request, start time is inclusive, but end time is exclusive. Note that these boundaries are for the time at which the original backup was requested.
You can call ListBackups a maximum of five times per second.
If you want to retrieve the complete list of backups made with Amazon Web Services Backup, use the Amazon Web Services Backup list API. |
|
ListBackupsAsync(ListBackupsRequest, CancellationToken) |
Asynchronous version of ListBackups; the same description applies. |
|
ListContributorInsights(ListContributorInsightsRequest) |
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes. |
|
ListContributorInsightsAsync(ListContributorInsightsRequest, CancellationToken) |
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes. |
|
ListExports(ListExportsRequest) |
Lists completed exports within the past 90 days. |
|
ListExportsAsync(ListExportsRequest, CancellationToken) |
Lists completed exports within the past 90 days. |
|
ListGlobalTables(ListGlobalTablesRequest) |
Lists all global tables that have a replica in the specified Region.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should
be avoided for new global tables. Customers should use Global
Tables version 2019.11.21 (Current) when possible, because it provides greater
flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining
the global table version you are using. To update existing global tables from
version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading
global tables.
|
|
ListGlobalTablesAsync(ListGlobalTablesRequest, CancellationToken) |
Asynchronous version of ListGlobalTables; the same description applies. |
|
ListImports(ListImportsRequest) |
Lists completed imports within the past 90 days. |
|
ListImportsAsync(ListImportsRequest, CancellationToken) |
Lists completed imports within the past 90 days. |
|
ListTables() |
Returns an array of table names associated with the current account and endpoint.
The output from ListTables is paginated, with each page returning a maximum of 100
table names. |
|
ListTables(string) |
Overload of ListTables; the description above applies. |
|
ListTables(string, int) |
Overload of ListTables; the description above applies. |
|
ListTables(int) |
Overload of ListTables; the description above applies. |
|
ListTables(ListTablesRequest) |
Overload of ListTables; the description above applies. |
|
ListTablesAsync(CancellationToken) |
Asynchronous version of ListTables; the description above applies. |
|
ListTablesAsync(string, CancellationToken) |
Asynchronous version of ListTables; the description above applies. |
|
ListTablesAsync(string, int, CancellationToken) |
Asynchronous version of ListTables; the description above applies. |
|
ListTablesAsync(int, CancellationToken) |
Asynchronous version of ListTables; the description above applies. |
|
ListTablesAsync(ListTablesRequest, CancellationToken) |
Asynchronous version of ListTables; the description above applies. |
|
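A sketch of walking every page of table names (client and usings as in the earlier sketch):

```csharp
string startTableName = null;
do
{
    var page = await client.ListTablesAsync(new ListTablesRequest
    {
        ExclusiveStartTableName = startTableName,
    });
    page.TableNames.ForEach(Console.WriteLine);
    startTableName = page.LastEvaluatedTableName; // null on the final page
} while (startTableName != null);
```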
ListTagsOfResource(ListTagsOfResourceRequest) |
List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per account. For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide. |
|
ListTagsOfResourceAsync(ListTagsOfResourceRequest, CancellationToken) |
List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per account. For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide. |
|
PutItem(string, Dictionary<String, AttributeValue>) |
Creates a new item, or replaces an old item with a new item. If an item that has the
same primary key as the new item already exists in the specified table, the new item
completely replaces the existing item. You can perform a conditional put operation
(add a new item if one with the specified primary key doesn't exist), or replace an
existing item if it has certain attribute values. You can return the item's attribute
values in the same operation, using the ReturnValues parameter.
When you add an item, the primary key attributes are the only required attributes.
Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index. Set type attributes cannot be empty.
Invalid requests with empty values will be rejected with a ValidationException exception.
To prevent a new item from replacing an existing item, use a conditional expression
that contains the attribute_not_exists function with the name of the attribute being
used as the partition key for the table. Since every record must contain that attribute,
the attribute_not_exists function will only succeed if no matching item exists.
For more information about PutItem, see Working with Items in the Amazon DynamoDB Developer Guide. |
|
PutItem(string, Dictionary<String, AttributeValue>, ReturnValue) |
Overload of PutItem; the description above applies. |
|
PutItem(PutItemRequest) |
Overload of PutItem; the description above applies. |
|
PutItemAsync(string, Dictionary<String, AttributeValue>, CancellationToken) |
Asynchronous version of PutItem; the description above applies. |
|
PutItemAsync(string, Dictionary<String, AttributeValue>, ReturnValue, CancellationToken) |
Asynchronous version of PutItem; the description above applies. |
|
PutItemAsync(PutItemRequest, CancellationToken) |
Asynchronous version of PutItem; the description above applies. |
|
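A sketch of the attribute_not_exists pattern described above, refusing to overwrite an existing item (hypothetical Music table; client and usings as in the earlier sketch):

```csharp
try
{
    await client.PutItemAsync(new PutItemRequest
    {
        TableName = "Music",
        Item = new Dictionary<string, AttributeValue>
        {
            ["Artist"] = new AttributeValue { S = "No One You Know" },
            ["SongTitle"] = new AttributeValue { S = "Call Me Today" },
            ["AlbumTitle"] = new AttributeValue { S = "Somewhat Famous" },
        },
        // Artist is the partition key, so this fails if the item already exists.
        ConditionExpression = "attribute_not_exists(Artist)",
    });
}
catch (ConditionalCheckFailedException)
{
    // An item with this primary key already exists; nothing was written.
}
```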
PutResourcePolicy(PutResourcePolicyRequest) |
Attaches a resource-based policy document to the resource, which can be a table or stream. When you attach a resource-based policy using this API, the policy application is eventually consistent.
PutResourcePolicy is an idempotent operation; running it multiple times on the same resource using the same policy document will return the same revision ID. If you specify an ExpectedRevisionId that doesn't match the current policy's RevisionId, a PolicyNotFoundException will be returned. |
|
PutResourcePolicyAsync(PutResourcePolicyRequest, CancellationToken) |
Asynchronous version of PutResourcePolicy; the same description applies. |
|
Query(QueryRequest) |
You must provide the name of the partition key attribute and a single value for that
attribute. Query returns all items with that partition key value. Optionally, you
can provide a sort key attribute and use a comparison operator to refine the search
results.
Use the KeyConditionExpression parameter to provide a specific value for the partition
key. The Query operation will return all of the items from the table or index with
that partition key value. You can optionally narrow the scope of the Query operation
by specifying a sort key value and a comparison operator in KeyConditionExpression.
To further refine the Query results, you can optionally provide a FilterExpression.
A FilterExpression determines which items within the results should be returned to
you. All of the other results are discarded.
A Query operation always returns a result set. If no matching items are found, the
result set will be empty. Queries that do not return results consume the minimum
number of read capacity units for that type of read operation.
DynamoDB calculates the number of read capacity units consumed based on item size,
not on the amount of data that is returned to an application. The number of capacity
units consumed will be the same whether you request all of the attributes (the default
behavior) or just some of them (using a projection expression). The number will also
be the same whether or not you use a FilterExpression.
A single Query operation will read up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then apply any filtering to
the results using FilterExpression. If LastEvaluatedKey is present in the response,
you will need to paginate the result set. For more information, see Paginating the
Results in the Amazon DynamoDB Developer Guide.
A FilterExpression is applied after the items have already been read; the process
of filtering does not consume any additional read capacity units.
You can query a table, a local secondary index, or a global secondary index. For a
query on a table or on a local secondary index, you can set the ConsistentRead parameter
to true and obtain a strongly consistent result. Global secondary indexes support
eventually consistent reads only, so do not specify ConsistentRead when querying a
global secondary index. |
|
QueryAsync(QueryRequest, CancellationToken) |
You must provide the name of the partition key attribute and a single value for that
attribute. Query returns all items with that partition key value. Optionally, you can
provide a sort key attribute and use a comparison operator to refine the search results.
Use the KeyConditionExpression parameter to provide a specific value for the partition
key. The Query operation will return all of the items from the table or index with
that partition key value. You can optionally narrow the scope of the Query operation
by specifying a sort key value and a comparison operator in KeyConditionExpression.
To further refine the Query results, you can optionally provide a FilterExpression,
which determines which items within the results should be returned to you. All of the
other results are discarded.
A Query operation always returns a result set. If no matching items are found, the
result set will be empty. Queries that do not return results consume the minimum number
of read capacity units for that type of read operation.
DynamoDB calculates the number of read capacity units consumed based on item size,
not on the amount of data that is returned to an application. The number of capacity
units consumed will be the same whether you request all of the attributes (the default
behavior) or just some of them (using a projection expression). The number will also
be the same whether or not you use a FilterExpression.
A single Query operation will read up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then apply any filtering to the
results using FilterExpression. If LastEvaluatedKey is present in the response, you
will need to paginate the result set.
A FilterExpression is applied after a Query finishes, but before the results are
returned. It cannot contain partition key or sort key attributes; specify those in
the KeyConditionExpression.
You can query a table, a local secondary index, or a global secondary index. For a
query on a table or on a local secondary index, you can set the ConsistentRead parameter
to true and obtain a strongly consistent result. Global secondary indexes support
eventually consistent reads only, so do not specify ConsistentRead when querying a
global secondary index. |
|
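As a sketch, a query against a hypothetical Music table with partition key Artist and sort key SongTitle; the names and values are illustrative only:

```csharp
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

var response = await client.QueryAsync(new QueryRequest
{
    TableName = "Music",
    // The partition key gets an equality test; the sort key may be refined.
    KeyConditionExpression = "Artist = :a AND begins_with(SongTitle, :t)",
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>
    {
        [":a"] = new AttributeValue { S = "No One You Know" },
        [":t"] = new AttributeValue { S = "Call" }
    },
    ScanIndexForward = false // descending order by sort key
});
// response.Items holds this page of matches; if response.LastEvaluatedKey is
// non-empty, pass it as ExclusiveStartKey on the next request to paginate.
```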
RestoreTableFromBackup(RestoreTableFromBackupRequest) |
Creates a new table from an existing backup. Any number of users can execute up to 50 concurrent restores (any type of restore) in a given account.
You can call RestoreTableFromBackup at a maximum rate of 10 times per second. You must manually set up the following on the restored table: auto scaling policies, IAM policies, Amazon CloudWatch metrics and alarms, tags, stream settings, and Time to Live (TTL) settings.
|
|
RestoreTableFromBackupAsync(RestoreTableFromBackupRequest, CancellationToken) |
Creates a new table from an existing backup. Any number of users can execute up to 50 concurrent restores (any type of restore) in a given account.
You can call RestoreTableFromBackup at a maximum rate of 10 times per second. You must manually set up the following on the restored table: auto scaling policies, IAM policies, Amazon CloudWatch metrics and alarms, tags, stream settings, and Time to Live (TTL) settings.
|
|
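A minimal sketch of a restore call; the target table name and backup ARN below are placeholders:

```csharp
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

// Restore a backup into a brand-new table. The new table starts in CREATING
// status; auto scaling, IAM policies, tags, alarms, stream and TTL settings
// must be re-applied manually afterwards.
await client.RestoreTableFromBackupAsync(new RestoreTableFromBackupRequest
{
    TargetTableName = "Music-restored",
    BackupArn = "arn:aws:dynamodb:us-west-2:123456789012:table/Music/backup/01" // placeholder ARN
});
```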
RestoreTableToPointInTime(RestoreTableToPointInTimeRequest) |
Restores the specified table to the specified point in time within EarliestRestorableDateTime and LatestRestorableDateTime. You can restore your table to any point in time during the last 35 days. When you restore using point in time recovery, DynamoDB restores your table data to the state based on the selected date and time (day:hour:minute:second) to a new table. Along with data, the following are also included on the new restored table using point in time recovery: global secondary indexes (GSIs), local secondary indexes (LSIs), provisioned read and write capacity, and encryption settings (all taken from the source table's current settings at the time of restore).
You must manually set up the following on the restored table: auto scaling policies, IAM policies, Amazon CloudWatch metrics and alarms, tags, stream settings, Time to Live (TTL) settings, and point in time recovery settings.
|
|
RestoreTableToPointInTimeAsync(RestoreTableToPointInTimeRequest, CancellationToken) |
Restores the specified table to the specified point in time within EarliestRestorableDateTime and LatestRestorableDateTime. You can restore your table to any point in time during the last 35 days. When you restore using point in time recovery, DynamoDB restores your table data to the state based on the selected date and time (day:hour:minute:second) to a new table. Along with data, the following are also included on the new restored table using point in time recovery: global secondary indexes (GSIs), local secondary indexes (LSIs), provisioned read and write capacity, and encryption settings (all taken from the source table's current settings at the time of restore).
You must manually set up the following on the restored table: auto scaling policies, IAM policies, Amazon CloudWatch metrics and alarms, tags, stream settings, Time to Live (TTL) settings, and point in time recovery settings.
|
|
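A sketch of a point-in-time restore to the latest restorable time; the table names are hypothetical, and RestoreDateTime could be supplied instead to pick a specific second inside the restorable window:

```csharp
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

await client.RestoreTableToPointInTimeAsync(new RestoreTableToPointInTimeRequest
{
    SourceTableName = "Music",      // hypothetical source table
    TargetTableName = "Music-pitr", // the restored copy is a new table
    UseLatestRestorableTime = true  // or set RestoreDateTime explicitly
});
```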
Scan(string, List<String>) |
The Scan operation returns one or more items and item attributes by accessing every
item in a table or a secondary index. To have DynamoDB return fewer items, you can
provide a FilterExpression.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB,
the scan completes and results are returned to the user. The LastEvaluatedKey value is
also returned, and the requestor can use it to continue the scan in a subsequent operation.
A single Scan operation first reads up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to
the results if a FilterExpression is provided. If LastEvaluatedKey is present in the
response, pagination is required to complete the full table scan.
By default, a Scan uses eventually consistent reads, so the results may not include
the latest item changes at the time the scan iterates through each item in the table.
If you require a strongly consistent read of each item, set the ConsistentRead
parameter to true.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true; a scan does not guarantee that all reads see a consistent
snapshot of the table as of the time the scan was requested. |
|
Scan(string, Dictionary<String, Condition>) |
The Scan operation returns one or more items and item attributes by accessing every
item in a table or a secondary index. To have DynamoDB return fewer items, you can
provide a FilterExpression.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB,
the scan completes and results are returned to the user. The LastEvaluatedKey value is
also returned, and the requestor can use it to continue the scan in a subsequent operation.
A single Scan operation first reads up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to
the results if a FilterExpression is provided. If LastEvaluatedKey is present in the
response, pagination is required to complete the full table scan.
By default, a Scan uses eventually consistent reads, so the results may not include
the latest item changes at the time the scan iterates through each item in the table.
If you require a strongly consistent read of each item, set the ConsistentRead
parameter to true.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true; a scan does not guarantee that all reads see a consistent
snapshot of the table as of the time the scan was requested. |
|
Scan(string, List<String>, Dictionary<String, Condition>) |
The Scan operation returns one or more items and item attributes by accessing every
item in a table or a secondary index. To have DynamoDB return fewer items, you can
provide a FilterExpression.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB,
the scan completes and results are returned to the user. The LastEvaluatedKey value is
also returned, and the requestor can use it to continue the scan in a subsequent operation.
A single Scan operation first reads up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to
the results if a FilterExpression is provided. If LastEvaluatedKey is present in the
response, pagination is required to complete the full table scan.
By default, a Scan uses eventually consistent reads, so the results may not include
the latest item changes at the time the scan iterates through each item in the table.
If you require a strongly consistent read of each item, set the ConsistentRead
parameter to true.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true; a scan does not guarantee that all reads see a consistent
snapshot of the table as of the time the scan was requested. |
|
Scan(ScanRequest) |
The Scan operation returns one or more items and item attributes by accessing every
item in a table or a secondary index. To have DynamoDB return fewer items, you can
provide a FilterExpression.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB,
the scan completes and results are returned to the user. The LastEvaluatedKey value is
also returned, and the requestor can use it to continue the scan in a subsequent operation.
A single Scan operation first reads up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to
the results if a FilterExpression is provided. If LastEvaluatedKey is present in the
response, pagination is required to complete the full table scan.
By default, a Scan uses eventually consistent reads, so the results may not include
the latest item changes at the time the scan iterates through each item in the table.
If you require a strongly consistent read of each item, set the ConsistentRead
parameter to true.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true; a scan does not guarantee that all reads see a consistent
snapshot of the table as of the time the scan was requested. |
|
ScanAsync(string, List<String>, CancellationToken) |
The Scan operation returns one or more items and item attributes by accessing every
item in a table or a secondary index. To have DynamoDB return fewer items, you can
provide a FilterExpression.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB,
the scan completes and results are returned to the user. The LastEvaluatedKey value is
also returned, and the requestor can use it to continue the scan in a subsequent operation.
A single Scan operation first reads up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to
the results if a FilterExpression is provided. If LastEvaluatedKey is present in the
response, pagination is required to complete the full table scan.
By default, a Scan uses eventually consistent reads, so the results may not include
the latest item changes at the time the scan iterates through each item in the table.
If you require a strongly consistent read of each item, set the ConsistentRead
parameter to true.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true; a scan does not guarantee that all reads see a consistent
snapshot of the table as of the time the scan was requested. |
|
ScanAsync(string, Dictionary<String, Condition>, CancellationToken) |
The Scan operation returns one or more items and item attributes by accessing every
item in a table or a secondary index. To have DynamoDB return fewer items, you can
provide a FilterExpression.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB,
the scan completes and results are returned to the user. The LastEvaluatedKey value is
also returned, and the requestor can use it to continue the scan in a subsequent operation.
A single Scan operation first reads up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to
the results if a FilterExpression is provided. If LastEvaluatedKey is present in the
response, pagination is required to complete the full table scan.
By default, a Scan uses eventually consistent reads, so the results may not include
the latest item changes at the time the scan iterates through each item in the table.
If you require a strongly consistent read of each item, set the ConsistentRead
parameter to true.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true; a scan does not guarantee that all reads see a consistent
snapshot of the table as of the time the scan was requested. |
|
ScanAsync(string, List<String>, Dictionary<String, Condition>, CancellationToken) |
The Scan operation returns one or more items and item attributes by accessing every
item in a table or a secondary index. To have DynamoDB return fewer items, you can
provide a FilterExpression.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB,
the scan completes and results are returned to the user. The LastEvaluatedKey value is
also returned, and the requestor can use it to continue the scan in a subsequent operation.
A single Scan operation first reads up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to
the results if a FilterExpression is provided. If LastEvaluatedKey is present in the
response, pagination is required to complete the full table scan.
By default, a Scan uses eventually consistent reads, so the results may not include
the latest item changes at the time the scan iterates through each item in the table.
If you require a strongly consistent read of each item, set the ConsistentRead
parameter to true.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true; a scan does not guarantee that all reads see a consistent
snapshot of the table as of the time the scan was requested. |
|
ScanAsync(ScanRequest, CancellationToken) |
The Scan operation returns one or more items and item attributes by accessing every
item in a table or a secondary index. To have DynamoDB return fewer items, you can
provide a FilterExpression.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB,
the scan completes and results are returned to the user. The LastEvaluatedKey value is
also returned, and the requestor can use it to continue the scan in a subsequent operation.
A single Scan operation first reads up to the maximum number of items set (if using
the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to
the results if a FilterExpression is provided. If LastEvaluatedKey is present in the
response, pagination is required to complete the full table scan.
By default, a Scan uses eventually consistent reads, so the results may not include
the latest item changes at the time the scan iterates through each item in the table.
If you require a strongly consistent read of each item, set the ConsistentRead
parameter to true.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true; a scan does not guarantee that all reads see a consistent
snapshot of the table as of the time the scan was requested. |
|
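Because a single Scan call returns at most 1 MB of data, a full table scan is a pagination loop over LastEvaluatedKey. A minimal sketch against a hypothetical Music table:

```csharp
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

var items = new List<Dictionary<string, AttributeValue>>();
Dictionary<string, AttributeValue> startKey = null;
do
{
    var page = await client.ScanAsync(new ScanRequest
    {
        TableName = "Music",
        ExclusiveStartKey = startKey // null on the first request
    });
    items.AddRange(page.Items);
    startKey = page.LastEvaluatedKey; // empty once the scan is complete
} while (startKey != null && startKey.Count > 0);
```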
TagResource(TagResourceRequest) |
Associates a set of tags with an Amazon DynamoDB resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. You can call TagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide. |
|
TagResourceAsync(TagResourceRequest, CancellationToken) |
Associates a set of tags with an Amazon DynamoDB resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. You can call TagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide. |
|
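A short sketch of tagging a table; the resource ARN and tag names are placeholders:

```csharp
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

await client.TagResourceAsync(new TagResourceRequest
{
    ResourceArn = "arn:aws:dynamodb:us-west-2:123456789012:table/Music", // placeholder
    Tags = new List<Tag>
    {
        new Tag { Key = "Environment", Value = "Production" },
        new Tag { Key = "Team", Value = "Media" }
    }
});
```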
TransactGetItems(TransactGetItemsRequest) |
TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or more tables (but not from indexes) in a single account and Region. A TransactGetItems call can contain up to 100 TransactGetItem objects, each of which contains a Get structure that specifies an item to retrieve from a table. The aggregate size of the items in the transaction cannot exceed 4 MB.
DynamoDB rejects the entire TransactGetItems request if any of the following is true: a conflicting operation is in the process of updating an item to be read; there is insufficient provisioned capacity for the transaction to be completed; there is a user error, such as an invalid data format; or the aggregate size of the items in the transaction exceeds 4 MB.
|
|
TransactGetItemsAsync(TransactGetItemsRequest, CancellationToken) |
TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or more tables (but not from indexes) in a single account and Region. A TransactGetItems call can contain up to 100 TransactGetItem objects, each of which contains a Get structure that specifies an item to retrieve from a table. The aggregate size of the items in the transaction cannot exceed 4 MB.
DynamoDB rejects the entire TransactGetItems request if any of the following is true: a conflicting operation is in the process of updating an item to be read; there is insufficient provisioned capacity for the transaction to be completed; there is a user error, such as an invalid data format; or the aggregate size of the items in the transaction exceeds 4 MB.
|
|
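A sketch of an atomic two-item read across two hypothetical tables, Orders and Customers:

```csharp
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

var response = await client.TransactGetItemsAsync(new TransactGetItemsRequest
{
    TransactItems = new List<TransactGetItem>
    {
        new TransactGetItem
        {
            Get = new Get
            {
                TableName = "Orders",
                Key = new Dictionary<string, AttributeValue>
                {
                    ["OrderId"] = new AttributeValue { S = "O-500" }
                }
            }
        },
        new TransactGetItem
        {
            Get = new Get
            {
                TableName = "Customers",
                Key = new Dictionary<string, AttributeValue>
                {
                    ["CustomerId"] = new AttributeValue { S = "C-100" }
                }
            }
        }
    }
});
// response.Responses[i].Item corresponds positionally to TransactItems[i].
```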
TransactWriteItems(TransactWriteItemsRequest) |
TransactWriteItems is a synchronous write operation that groups up to 100 action requests. The actions are completed atomically so that either all of them succeed, or all of them fail. They are defined by the following objects: Put, Update, Delete, and ConditionCheck. The actions can target items in different tables, but not in different Amazon Web Services accounts or Regions, and no two actions can target the same item.
DynamoDB rejects the entire TransactWriteItems request if any of the following is true: a condition in one of the condition expressions is not met; an ongoing operation is in the process of updating the same item; there is insufficient provisioned capacity for the transaction to be completed; an item size becomes too large (larger than 400 KB), a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction; the aggregate size of the items in the transaction exceeds 4 MB; or there is a user error, such as an invalid data format.
|
|
TransactWriteItemsAsync(TransactWriteItemsRequest, CancellationToken) |
TransactWriteItems is a synchronous write operation that groups up to 100 action requests. The actions are completed atomically so that either all of them succeed, or all of them fail. They are defined by the following objects: Put, Update, Delete, and ConditionCheck. The actions can target items in different tables, but not in different Amazon Web Services accounts or Regions, and no two actions can target the same item.
DynamoDB rejects the entire TransactWriteItems request if any of the following is true: a condition in one of the condition expressions is not met; an ongoing operation is in the process of updating the same item; there is insufficient provisioned capacity for the transaction to be completed; an item size becomes too large (larger than 400 KB), a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction; the aggregate size of the items in the transaction exceeds 4 MB; or there is a user error, such as an invalid data format.
|
|
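A sketch pairing a ConditionCheck with a Put so an order is written only if its customer exists; both actions commit or neither does. The table and attribute names are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

try
{
    await client.TransactWriteItemsAsync(new TransactWriteItemsRequest
    {
        TransactItems = new List<TransactWriteItem>
        {
            new TransactWriteItem
            {
                ConditionCheck = new ConditionCheck
                {
                    TableName = "Customers",
                    Key = new Dictionary<string, AttributeValue>
                    {
                        ["CustomerId"] = new AttributeValue { S = "C-100" }
                    },
                    ConditionExpression = "attribute_exists(CustomerId)"
                }
            },
            new TransactWriteItem
            {
                Put = new Put
                {
                    TableName = "Orders",
                    Item = new Dictionary<string, AttributeValue>
                    {
                        ["OrderId"] = new AttributeValue { S = "O-500" },
                        ["CustomerId"] = new AttributeValue { S = "C-100" }
                    }
                }
            }
        }
    });
}
catch (TransactionCanceledException ex)
{
    // CancellationReasons lists, per action, why the transaction was rejected.
    Console.WriteLine(ex.Message);
}
```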
UntagResource(UntagResourceRequest) |
Removes the association of tags from an Amazon DynamoDB resource. You can call UntagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide. |
|
UntagResourceAsync(UntagResourceRequest, CancellationToken) |
Removes the association of tags from an Amazon DynamoDB resource. You can call UntagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide. |
|
UpdateContinuousBackups(UpdateContinuousBackupsRequest) |
UpdateContinuousBackups enables or disables point in time recovery for the specified table. A successful call returns the current ContinuousBackupsDescription. Continuous backups are ENABLED on all tables at table creation; if point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
Once continuous backups and point in time recovery are enabled, you can restore to
any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
LatestRestorableDateTime is typically 5 minutes before the current time. You can
restore your table to any point in time during the last 35 days.
|
|
UpdateContinuousBackupsAsync(UpdateContinuousBackupsRequest, CancellationToken) |
UpdateContinuousBackups enables or disables point in time recovery for the specified table. A successful call returns the current ContinuousBackupsDescription. Continuous backups are ENABLED on all tables at table creation; if point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
Once continuous backups and point in time recovery are enabled, you can restore to
any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
LatestRestorableDateTime is typically 5 minutes before the current time. You can
restore your table to any point in time during the last 35 days.
|
|
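A sketch of enabling point in time recovery and reading back the restorable window; the table name is hypothetical:

```csharp
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

var response = await client.UpdateContinuousBackupsAsync(new UpdateContinuousBackupsRequest
{
    TableName = "Music",
    PointInTimeRecoverySpecification = new PointInTimeRecoverySpecification
    {
        PointInTimeRecoveryEnabled = true
    }
});

// The returned description carries the window usable with RestoreTableToPointInTime:
// pitr.EarliestRestorableDateTime .. pitr.LatestRestorableDateTime
var pitr = response.ContinuousBackupsDescription.PointInTimeRecoveryDescription;
```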
UpdateContributorInsights(UpdateContributorInsightsRequest) |
Updates the status for contributor insights for a specific table or index. CloudWatch Contributor Insights for DynamoDB graphs display the partition key and (if applicable) sort key of frequently accessed items and frequently throttled items in plaintext. If you require the use of Amazon Web Services Key Management Service (KMS) to encrypt this table’s partition key and sort key data with an Amazon Web Services managed key or customer managed key, you should not enable CloudWatch Contributor Insights for DynamoDB for this table. |
|
UpdateContributorInsightsAsync(UpdateContributorInsightsRequest, CancellationToken) |
Updates the status for contributor insights for a specific table or index. CloudWatch Contributor Insights for DynamoDB graphs display the partition key and (if applicable) sort key of frequently accessed items and frequently throttled items in plaintext. If you require the use of Amazon Web Services Key Management Service (KMS) to encrypt this table’s partition key and sort key data with an Amazon Web Services managed key or customer managed key, you should not enable CloudWatch Contributor Insights for DynamoDB for this table. |
|
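A one-call sketch enabling Contributor Insights on a hypothetical table; add IndexName to target a global secondary index instead:

```csharp
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

await client.UpdateContributorInsightsAsync(new UpdateContributorInsightsRequest
{
    TableName = "Music", // hypothetical table
    ContributorInsightsAction = ContributorInsightsAction.ENABLE
});
```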
UpdateGlobalTable(UpdateGlobalTableRequest) |
Adds or removes replicas in the specified global table. The global table must already
exist to be able to use this operation. Any replica to be added must be empty, have
the same name as the global table, have the same key schema, have DynamoDB Streams
enabled, and have the same provisioned and maximum write capacity units.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should
be avoided for new global tables. Customers should use Global
Tables version 2019.11.21 (Current) when possible, because it provides greater
flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining
the global table version you are using. To update existing global tables from
version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading
global tables.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. If you
are using global tables Version 2019.11.21 (Current), use UpdateTable instead.
Although you can use UpdateGlobalTable to add replicas and remove replicas in a single
request, for simplicity we recommend that you issue separate requests for adding or
removing replicas. If global secondary indexes are specified, then the following
conditions must also be met: the global secondary indexes must have the same name;
they must have the same hash key and sort key (if present); and they must have the
same provisioned and maximum write capacity units.
|
|
UpdateGlobalTableAsync(UpdateGlobalTableRequest, CancellationToken) |
Adds or removes replicas in the specified global table. The global table must already
exist to be able to use this operation. Any replica to be added must be empty, have
the same name as the global table, have the same key schema, have DynamoDB Streams
enabled, and have the same provisioned and maximum write capacity units.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should
be avoided for new global tables. Customers should use Global
Tables version 2019.11.21 (Current) when possible, because it provides greater
flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining
the global table version you are using. To update existing global tables from
version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading
global tables.
This operation only applies to Version 2017.11.29 (Legacy) of global tables. If you
are using global tables Version 2019.11.21 (Current), use UpdateTable instead.
Although you can use UpdateGlobalTable to add replicas and remove replicas in a single
request, for simplicity we recommend that you issue separate requests for adding or
removing replicas. If global secondary indexes are specified, then the following
conditions must also be met: the global secondary indexes must have the same name;
they must have the same hash key and sort key (if present); and they must have the
same provisioned and maximum write capacity units.
|
|
UpdateGlobalTableSettings(UpdateGlobalTableSettingsRequest) |
Updates settings for a global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should
be avoided for new global tables. Customers should use Global
Tables version 2019.11.21 (Current) when possible, because it provides greater
flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining
the global table version you are using. To update existing global tables from
version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading
global tables.
|
|
UpdateGlobalTableSettingsAsync(UpdateGlobalTableSettingsRequest, CancellationToken) |
Updates settings for a global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should
be avoided for new global tables. Customers should use Global
Tables version 2019.11.21 (Current) when possible, because it provides greater
flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining
the global table version you are using. To update existing global tables from
version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading
global tables.
|
|
UpdateItem(string, Dictionary<String, AttributeValue>, Dictionary<String, AttributeValueUpdate>) |
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter. |
|
UpdateItem(string, Dictionary<String, AttributeValue>, Dictionary<String, AttributeValueUpdate>, ReturnValue) |
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter. |
|
UpdateItem(UpdateItemRequest) |
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter. |
|
UpdateItemAsync(string, Dictionary<String, AttributeValue>, Dictionary<String, AttributeValueUpdate>, CancellationToken) |
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter. |
|
UpdateItemAsync(string, Dictionary<String, AttributeValue>, Dictionary<String, AttributeValueUpdate>, ReturnValue, CancellationToken) |
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter. |
|
UpdateItemAsync(UpdateItemRequest, CancellationToken) |
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter. |
|
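A sketch of an atomic counter update that also returns the post-update item via ReturnValues; the Music table and its Plays attribute are hypothetical:

```csharp
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

var response = await client.UpdateItemAsync(new UpdateItemRequest
{
    TableName = "Music",
    Key = new Dictionary<string, AttributeValue>
    {
        ["Artist"] = new AttributeValue { S = "No One You Know" },
        ["SongTitle"] = new AttributeValue { S = "Call Me Today" }
    },
    // if_not_exists seeds the counter on first update, then increments it.
    UpdateExpression = "SET Plays = if_not_exists(Plays, :zero) + :one",
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>
    {
        [":zero"] = new AttributeValue { N = "0" },
        [":one"]  = new AttributeValue { N = "1" }
    },
    ReturnValues = ReturnValue.ALL_NEW
});
// response.Attributes is the item as it looks after the update.
```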
UpdateKinesisStreamingDestination(UpdateKinesisStreamingDestinationRequest) |
Updates the Kinesis streaming destination for the specified table. |
|
UpdateKinesisStreamingDestinationAsync(UpdateKinesisStreamingDestinationRequest, CancellationToken) |
Updates the Kinesis streaming destination for the specified table. |
|
UpdateTable(string, ProvisionedThroughput) |
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB
Streams settings for a given table.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
You can only perform one of the following operations at once: modify the provisioned
throughput settings of the table; remove a global secondary index from the table; or
create a new global secondary index on the table (once the index begins backfilling,
you can use UpdateTable to perform other operations). UpdateTable is an asynchronous
operation; while it's executing, the table status changes from ACTIVE to UPDATING.
While it's UPDATING, you can't issue another UpdateTable request. When the table
returns to the ACTIVE state, the UpdateTable operation is complete.
|
|
UpdateTable(UpdateTableRequest) |
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB
Streams settings for a given table.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
You can only perform one of the following operations at once: modify the provisioned
throughput settings of the table; remove a global secondary index from the table; or
create a new global secondary index on the table (once the index begins backfilling,
you can use UpdateTable to perform other operations). UpdateTable is an asynchronous
operation; while it's executing, the table status changes from ACTIVE to UPDATING.
While it's UPDATING, you can't issue another UpdateTable request. When the table
returns to the ACTIVE state, the UpdateTable operation is complete.
|
|
UpdateTableAsync(string, ProvisionedThroughput, CancellationToken) |
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB
Streams settings for a given table.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
You can only perform one of the following operations at once: modify the provisioned
throughput settings of the table; remove a global secondary index from the table; or
create a new global secondary index on the table (once the index begins backfilling,
you can use UpdateTable to perform other operations). UpdateTable is an asynchronous
operation; while it's executing, the table status changes from ACTIVE to UPDATING.
While it's UPDATING, you can't issue another UpdateTable request. When the table
returns to the ACTIVE state, the UpdateTable operation is complete.
|
|
UpdateTableAsync(UpdateTableRequest, CancellationToken) |
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB
Streams settings for a given table.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
You can only perform one of the following operations at once: modify the provisioned
throughput settings of the table; remove a global secondary index from the table; or
create a new global secondary index on the table (once the index begins backfilling,
you can use UpdateTable to perform other operations). UpdateTable is an asynchronous
operation; while it's executing, the table status changes from ACTIVE to UPDATING.
While it's UPDATING, you can't issue another UpdateTable request. When the table
returns to the ACTIVE state, the UpdateTable operation is complete.
|
|
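Because UpdateTable is asynchronous, a common pattern is to issue the change and then poll DescribeTable until the status returns to ACTIVE. A sketch with hypothetical capacity values:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

await client.UpdateTableAsync(new UpdateTableRequest
{
    TableName = "Music",
    ProvisionedThroughput = new ProvisionedThroughput(10, 5) // read, write units
});

// The table stays UPDATING until the change completes; no further UpdateTable
// requests are accepted in the meantime.
TableDescription table;
do
{
    await Task.Delay(TimeSpan.FromSeconds(5));
    table = (await client.DescribeTableAsync("Music")).Table;
} while (table.TableStatus == TableStatus.UPDATING);
```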
UpdateTableReplicaAutoScaling(UpdateTableReplicaAutoScalingRequest) |
Updates auto scaling settings on your global tables at once.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
|
|
UpdateTableReplicaAutoScalingAsync(UpdateTableReplicaAutoScalingRequest, CancellationToken) |
Updates auto scaling settings on your global tables at once.
For global tables, this operation only applies to global tables using Version 2019.11.21
(Current version).
|
|
UpdateTimeToLive(UpdateTimeToLiveRequest) |
The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification. It can take up to one hour for the change to fully process; any additional UpdateTimeToLive calls for the same table during this hour result in a ValidationException.
TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted. The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.
DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data operations. DynamoDB typically deletes expired items within two days of expiration; the exact duration within which an item gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been deleted will still show up in reads, queries, and scans.
As items are deleted, they are removed from any local secondary index and global secondary index immediately, in the same eventually consistent way as a standard delete operation.
For more information, see Time To Live in the Amazon DynamoDB Developer Guide. |
|
UpdateTimeToLiveAsync(UpdateTimeToLiveRequest, CancellationToken) |
The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification. It can take up to one hour for the change to fully process; any additional UpdateTimeToLive calls for the same table during this hour result in a ValidationException.
TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted. The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.
DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data operations. DynamoDB typically deletes expired items within two days of expiration; the exact duration within which an item gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been deleted will still show up in reads, queries, and scans.
As items are deleted, they are removed from any local secondary index and global secondary index immediately, in the same eventually consistent way as a standard delete operation.
For more information, see Time To Live in the Amazon DynamoDB Developer Guide. |
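A sketch enabling TTL on a hypothetical Sessions table whose items carry an ExpiresAt attribute of type Number holding an epoch-seconds timestamp:

```csharp
using System;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

IAmazonDynamoDB client = new AmazonDynamoDBClient();

await client.UpdateTimeToLiveAsync(new UpdateTimeToLiveRequest
{
    TableName = "Sessions",
    TimeToLiveSpecification = new TimeToLiveSpecification
    {
        AttributeName = "ExpiresAt", // epoch seconds, Number type
        Enabled = true
    }
});

// Items then opt in by carrying that attribute, e.g. to expire in one hour:
var expiresAt = DateTimeOffset.UtcNow.AddHours(1).ToUnixTimeSeconds().ToString();
```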
.NET:
Supported in: 8.0 and newer, Core 3.1
.NET Standard:
Supported in: 2.0
.NET Framework:
Supported in: 4.5 and newer, 3.5