PERF03-BP02 Evaluate available configuration options for data store

Understand and evaluate the various features and configuration options available for your data stores to optimize storage space and performance for your workload.

Common anti-patterns:

  • You only use one storage type, such as Amazon EBS, for all workloads.

  • You use provisioned IOPS for all workloads without real-world testing against all storage tiers.

  • You are not aware of the configuration options of your chosen data management solution.

  • You rely solely on increasing instance size without looking at other available configuration options.

  • You are not testing the scaling characteristics of your data store.

Benefits of establishing this best practice: By exploring and experimenting with the data store configurations, you may be able to reduce the cost of infrastructure, improve performance, and lower the effort required to maintain your workloads.

Level of risk exposed if this best practice is not established: Medium

Implementation guidance

A workload may use one or more data stores, depending on its data storage and access requirements. To optimize performance efficiency and cost, evaluate your data access patterns to determine the appropriate data store configurations. As you explore data store options, consider aspects such as storage options, memory, compute, read replicas, consistency requirements, connection pooling, and caching options. Experiment with these configuration options to improve performance efficiency metrics.

Implementation steps

  • Understand the current configurations (like instance type, storage size, or database engine version) of your data store.

  • Review AWS documentation and best practices to learn about recommended configuration options that can help improve the performance of your data store. Key data store options to consider are the following:

    Configuration option: Offloading reads (like read replicas and caching)
    Examples:
    • For DynamoDB tables, you can offload reads using DAX for caching.

    • You can create an Amazon ElastiCache (Redis OSS) cluster and configure your application to read from the cache first, falling back to the database if the requested item is not present (a read-through sketch appears after the implementation steps).

    • Relational databases such as Amazon RDS and Aurora, as well as provisioned NoSQL databases such as Neptune and Amazon DocumentDB, support adding read replicas to offload the read portion of the workload.

    • Serverless databases such as DynamoDB scale automatically. If you use provisioned capacity, ensure that you have enough read capacity units (RCUs) to handle the workload.

    Configuration option: Scaling writes (like partition key sharding or introducing a queue)
    Examples:
    • For relational databases, you can increase the size of the instance to accommodate an increased workload, or increase the provisioned IOPS to allow for increased throughput to the underlying storage.

    • You can also introduce a queue in front of your database rather than writing directly to it. This pattern decouples ingestion from the database and controls the flow rate so the database does not get overwhelmed (see the Amazon SQS sketch after the implementation steps).

    • Batching your write requests rather than creating many short-lived transactions can help improve throughput in high-write volume relational databases.

    • Serverless databases like DynamoDB can scale the write throughput automatically or by adjusting the provisioned write capacity units (WCU) depending on the capacity mode. 

    • You can still run into issues with hot partitions when you reach the throughput limits for a given partition key. This can be mitigated by choosing a more evenly distributed partition key or by write-sharding the partition key (see the write-sharding sketch after the implementation steps).

    Configuration option: Policies to manage the lifecycle of your datasets
    Examples:
    • You can use Amazon S3 Lifecycle to manage your objects throughout their lifecycle. If your access patterns are unknown, changing, or unpredictable, you can use Amazon S3 Intelligent-Tiering, which monitors access patterns and automatically moves objects that have not been accessed to lower-cost access tiers. You can use Amazon S3 Storage Lens metrics to identify optimization opportunities and gaps in lifecycle management (a sample Lifecycle configuration appears after the implementation steps).

    • Amazon EFS lifecycle management automatically manages file storage for your file systems.

    Configuration option: Connection management and pooling
    Examples:
    • Amazon RDS Proxy can be used with Amazon RDS and Aurora to pool and manage connections to the database (see the connection sketch after the implementation steps).

    • Serverless databases such as DynamoDB do not have connections associated with them, but consider the provisioned capacity and automatic scaling policies to deal with spikes in load.

  • Perform experiments and benchmarking in a non-production environment to identify which configuration options address your workload requirements.

  • Once you have experimented, plan your migration and validate your performance metrics.

  • Use AWS monitoring tools (like Amazon CloudWatch) and optimization tools (like Amazon S3 Storage Lens) to continuously optimize your data store based on real-world usage patterns (see the CloudWatch sketch after the implementation steps).
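
The following is a minimal sketch of the cache-first read path mentioned in the read-offloading examples, using an ElastiCache (Redis OSS) endpoint with the redis-py client. The endpoint, key format, TTL, and the query_orders_from_database placeholder are illustrative assumptions, not part of this guidance.

```python
import json

import redis  # redis-py client pointed at an ElastiCache (Redis OSS) endpoint

# Hypothetical ElastiCache (Redis OSS) endpoint.
cache = redis.Redis(host="my-cache.example.amazonaws.com", port=6379)

CACHE_TTL_SECONDS = 300


def query_orders_from_database(customer_id):
    """Placeholder for the real database query (for example, against Amazon RDS)."""
    raise NotImplementedError


def get_orders(customer_id):
    """Read-through cache: try the cache first, fall back to the database on a miss."""
    key = f"orders:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # Cache miss: read from the primary database and populate the cache.
    orders = query_orders_from_database(customer_id)
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(orders))
    return orders
```

The TTL bounds how stale cached reads can become; tune it to your consistency requirements.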
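To illustrate placing a queue in front of the database, here is a sketch using Amazon SQS with boto3. The queue URL and record format are assumptions, and persist_batch stands in for whatever batched database write your workload uses; it also shows the batching idea from the scaling-writes examples.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL; create the queue separately and substitute its URL here.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"


def enqueue_write(record):
    """Producer: send the write to SQS instead of writing to the database directly."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))


def persist_batch(records):
    """Placeholder for a single batched write to the database."""
    raise NotImplementedError


def drain_queue_once():
    """Consumer: read up to 10 messages, write them as one batch, then delete them."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    messages = response.get("Messages", [])
    if not messages:
        return

    persist_batch([json.loads(m["Body"]) for m in messages])
    sqs.delete_message_batch(
        QueueUrl=QUEUE_URL,
        Entries=[
            {"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
            for m in messages
        ],
    )
```

Running the consumer at a controlled rate keeps the database from being overwhelmed during ingestion spikes.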
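The write-sharding mitigation for hot partitions can look like the following DynamoDB sketch. The table name, key attribute names, and shard count are hypothetical; note that reads must fan out across all suffixes to reassemble an item's data.

```python
import random

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
# Hypothetical table whose partition key is "pk" and sort key is "sk".
table = dynamodb.Table("DeviceEvents")

SHARD_COUNT = 10  # Spread writes for one device across 10 logical partitions.


def put_event(device_id, timestamp, payload):
    """Write-shard the partition key by appending a random suffix."""
    shard = random.randint(0, SHARD_COUNT - 1)
    table.put_item(
        Item={
            "pk": f"{device_id}#{shard}",  # for example "device-42#7"
            "sk": timestamp,
            "payload": payload,
        }
    )


def get_events(device_id):
    """Reads fan out across all shards and merge the results."""
    items = []
    for shard in range(SHARD_COUNT):
        response = table.query(
            KeyConditionExpression=Key("pk").eq(f"{device_id}#{shard}")
        )
        items.extend(response["Items"])
    return items
```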
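For lifecycle management, this sketch applies an Amazon S3 Lifecycle configuration with boto3. The bucket name, prefix, transition period, and expiration period are illustrative assumptions; adjust them to your own retention requirements.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-workload-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # Move objects to S3 Intelligent-Tiering after 30 days.
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
                # Delete the objects after one year.
                "Expiration": {"Days": 365},
            }
        ]
    },
)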
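For connection management, the application simply connects to the RDS Proxy endpoint instead of the database instance, and the proxy pools and reuses connections. A minimal sketch with the PyMySQL client follows; the proxy endpoint, credentials, database name, and query are placeholders (in practice, retrieve credentials from AWS Secrets Manager or use IAM authentication).

```python
import pymysql

# Hypothetical RDS Proxy endpoint; the proxy, not the application, owns the
# pool of connections to the underlying Amazon RDS or Aurora database.
connection = pymysql.connect(
    host="my-workload-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com",
    user="app_user",
    password="...",  # Placeholder; prefer Secrets Manager or IAM auth.
    database="orders",
    connect_timeout=5,
)

with connection.cursor() as cursor:
    cursor.execute("SELECT COUNT(*) FROM orders")
    print(cursor.fetchone())

connection.close()
```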
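Finally, for the continuous-optimization step, this sketch pulls a few Amazon RDS metrics from Amazon CloudWatch with boto3 so that configuration changes can be compared against real usage. The DB instance identifier, metric selection, and time window are assumptions.

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical RDS instance identifier.
DB_INSTANCE = "my-rds-instance"
END = datetime.datetime.utcnow()
START = END - datetime.timedelta(days=1)

for metric in ("CPUUtilization", "ReadLatency", "WriteLatency", "DatabaseConnections"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": DB_INSTANCE}],
        StartTime=START,
        EndTime=END,
        Period=3600,  # One datapoint per hour.
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
        print(metric, point["Timestamp"], round(point["Average"], 4))
```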
