REL10-BP01 Deploy the workload to multiple locations
Distribute workload data and resources across multiple Availability Zones or, where necessary, across AWS Regions.
A fundamental principle for service design in AWS is to avoid single points of failure, including the underlying physical infrastructure. AWS provides cloud computing resources and services globally across multiple geographic locations called Regions. Each Region is physically and logically independent and consists of three or more Availability Zones (AZs). Availability Zones are geographically close to each other but are physically separated and isolated. When you distribute your workloads among Availability Zones and Regions, you mitigate the risk of threats such as fires, floods, weather-related disasters, earthquakes, and human error.
Create a location strategy to provide high availability that is appropriate for your workloads.
Desired outcome: Production workloads are distributed among multiple Availability Zones (AZs) or Regions to achieve fault tolerance and high availability.
Common anti-patterns:
- Your production workload exists only in a single Availability Zone.
- You implement a multi-Region architecture when a multi-AZ architecture would satisfy business requirements.
- Your deployments or data become desynchronized, which results in configuration drift or under-replicated data.
- You don't account for dependencies between application components if resilience and multi-location requirements differ between those components.
Benefits of establishing this best practice:
- Your workload is more resilient to incidents, such as power or environmental control failures, natural disasters, upstream service failures, or network issues that impact an AZ or an entire Region.
- You can access a wider inventory of Amazon EC2 instances and reduce the likelihood of InsufficientCapacityExceptions (ICE) when launching specific EC2 instance types.
Level of risk exposed if this best practice is not established: High
Implementation guidance
Deploy and operate all production workloads in at least two Availability Zones (AZs) in a Region.
Using multiple Availability Zones
Availability Zones are resource hosting locations that are physically separated from each other to avoid correlated failures due to risks such as fires, floods, and tornadoes. Each Availability Zone has independent physical infrastructure, including utility power connections, backup power sources, mechanical services, and network connectivity. This arrangement limits faults in any of these components to just the impacted Availability Zone. For example, if an AZ-wide incident makes EC2 instances in the affected Availability Zone unavailable, your instances in other Availability Zones remain available.
Despite being physically separated, Availability Zones in the same AWS Region are close enough to provide high-throughput, low-latency (single-digit millisecond) networking. You can replicate data synchronously between Availability Zones for most workloads without significantly impacting user experience. This means you can use Availability Zones in a Region in an active/active or active/standby configuration.
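For example, Amazon RDS Multi-AZ deployments use synchronous replication to maintain a standby database instance in a different Availability Zone. The following boto3 sketch shows one way to enable this; the database identifier, engine, instance class, and credential handling are placeholder assumptions rather than part of this guidance.

```python
import boto3

# Sketch: provision an RDS instance with a synchronous standby in another AZ.
# All identifiers below are placeholders; adapt them to your environment.
rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",       # hypothetical instance name
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,          # RDS stores the credential in Secrets Manager
    MultiAZ=True,                           # synchronous standby in a second AZ
)
```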
All compute associated with your workload, such as Amazon EC2 instances, should be distributed among multiple Availability Zones. You should also replicate your workload's data and make it available in multiple Availability Zones. Some AWS managed data services, such as Amazon S3, automatically store data redundantly across multiple Availability Zones. If you are using self-managed storage, such as Amazon Elastic Block Store (EBS) volumes that are tied to a single Availability Zone, you are responsible for replicating the data yourself, for example through EBS snapshots or application-level replication.
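If EBS is part of your design, one simple way to make volume data available in another Availability Zone is to snapshot the volume and restore it where needed. A boto3 sketch with placeholder volume ID and Availability Zone names:

```python
import boto3

# Sketch: copy self-managed EBS data into a second Availability Zone by
# snapshotting the source volume and restoring it elsewhere. IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Cross-AZ copy of application data",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Restore the snapshot as a new volume in a different Availability Zone.
ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1b",
    VolumeType="gp3",
)
```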
Using multiple AWS Regions
If you have workloads that require extreme resilience (such as critical infrastructure, health-related applications, or services with stringent customer or mandated availability requirements), you may require additional availability beyond what a single AWS Region can provide. In this case, you should deploy and operate your workload across at least two AWS Regions (assuming that your data residency requirements allow it).
AWS Regions are located in different geographic areas around the world, across multiple continents, and have even greater physical separation and isolation than Availability Zones alone. With few exceptions, AWS services take advantage of this design and operate fully independently in each Region (these are known as Regional services). Regional services are designed so that a failure in one Region does not impact the same service in a different Region.
When you operate your workload in multiple Regions, you should consider additional requirements. Because resources in different Regions are separate from and independent of one another, you must duplicate your workload's components in each Region. This includes foundational infrastructure, such as VPCs, in addition to compute and data services.
NOTE: When you consider a multi-Regional design, verify that your workload is capable of running in a single Region. If you create dependencies between Regions where a component in one Region relies on services or components in a different Region, you can increase the risk of failure and significantly weaken your reliability posture.
To ease multi-Regional deployments and maintain consistency, AWS CloudFormation StackSets can replicate your entire AWS infrastructure across multiple Regions.
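A minimal boto3 sketch of this approach, assuming a stack set that uses self-managed permissions; the template URL, account ID, and Region list are placeholders:

```python
import boto3

# Sketch: deploy one CloudFormation template to several Regions with StackSets.
cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack_set(
    StackSetName="workload-baseline",
    TemplateURL="https://s3.amazonaws.com/example-bucket/workload.yaml",  # placeholder
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

cfn.create_stack_instances(
    StackSetName="workload-baseline",
    Accounts=["111122223333"],               # placeholder account ID
    Regions=["us-east-1", "us-west-2"],
    OperationPreferences={"MaxConcurrentCount": 1, "FailureToleranceCount": 0},
)
```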
You must also replicate your data across each of your chosen Regions. Many AWS managed data services provide cross-Regional replication capabilities, including Amazon S3, Amazon DynamoDB, Amazon RDS, Amazon Aurora, Amazon Redshift, Amazon ElastiCache, and Amazon EFS. For example, Amazon DynamoDB global tables maintain replicas of your data in multiple Regions that can be written to from any supported Region.
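As an illustration, the following boto3 sketch creates a DynamoDB table and then adds a replica Region to make it a global table. The table name, key schema, and Regions are placeholder assumptions.

```python
import boto3

# Sketch: create a table, then add a replica Region (global tables 2019.11.21).
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="orders",                                         # placeholder
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
dynamodb.get_waiter("table_exists").wait(TableName="orders")

# Replicate the table into a second Region.
dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```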
AWS also provides the ability to route request traffic to your Regional deployments with great flexibility. For example, you can configure your DNS records using Amazon Route 53 to route requests based on latency, geography, or the health of your Regional endpoints.
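For instance, a latency-based routing policy returns the Regional endpoint closest (in network terms) to the caller. The boto3 sketch below uses placeholder hosted zone, record name, and load balancer alias values.

```python
import boto3

# Sketch: latency-based DNS records pointing at load balancers in two Regions.
route53 = boto3.client("route53")

def latency_alias(region, lb_dns_name, lb_zone_id):
    # Builds one latency-routed alias record; all inputs are placeholders.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": region,
            "Region": region,                      # enables latency-based routing
            "AliasTarget": {
                "HostedZoneId": lb_zone_id,        # zone ID of the Regional load balancer
                "DNSName": lb_dns_name,
                "EvaluateTargetHealth": True,      # steer away from unhealthy endpoints
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",             # placeholder hosted zone
    ChangeBatch={"Changes": [
        latency_alias("us-east-1", "use1-alb.example.elb.amazonaws.com", "ZALBUSE1EXAMPLE"),
        latency_alias("eu-west-1", "euw1-alb.example.elb.amazonaws.com", "ZALBEUW1EXAMPLE"),
    ]},
)
```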
Even if you choose not to operate in multiple Regions for high availability, consider multiple Regions as part of your disaster recovery (DR) strategy. If possible, replicate your workload's infrastructure components and data in a warm standby or pilot light configuration in a secondary Region. In this design, you replicate baseline infrastructure from the primary Region, such as VPCs, Auto Scaling groups, container orchestrators, and other components, but you configure the variable-sized components in the standby Region (such as the number of EC2 instances and database replicas) to be a minimally operable size. You also arrange for continuous data replication from the primary Region to the standby Region. If an incident occurs, you can scale out the resources in the standby Region and then promote it to become the primary Region.
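During failover, promoting a warm standby typically means scaling out compute and promoting replicated data stores in the standby Region. A boto3 sketch of those two calls, with placeholder resource names:

```python
import boto3

# Sketch: grow the standby Region's capacity and promote its database replica.
standby_region = "us-west-2"                       # placeholder standby Region

autoscaling = boto3.client("autoscaling", region_name=standby_region)
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",                # placeholder ASG in the standby Region
    MinSize=3,
    DesiredCapacity=6,                             # scale up from the pilot-light size
    MaxSize=12,
)

rds = boto3.client("rds", region_name=standby_region)
rds.promote_read_replica(DBInstanceIdentifier="orders-db-replica")  # placeholder replica
```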
Implementation steps
- Work with business stakeholders and data residency experts to determine which AWS Regions can be used to host your resources and data.
- Work with business and technical stakeholders to evaluate your workload and determine whether its resilience needs can be met by a multi-AZ approach (single AWS Region) or whether they require a multi-Region approach (if multiple Regions are permitted). The use of multiple Regions can achieve greater availability but can also involve additional complexity and cost. Consider the following factors in your evaluation:
  - Business objectives and customer requirements: How much downtime is permitted should a workload-impacting incident occur in an Availability Zone or a Region? Evaluate your recovery time and recovery point objectives as discussed in REL13-BP01 Define recovery objectives for downtime and data loss.
  - Disaster recovery (DR) requirements: What kinds of potential disasters do you want to protect against? Consider the possibility of data loss or long-term unavailability at different scopes of impact, from a single Availability Zone to an entire Region. If you replicate data and resources across Availability Zones and a single Availability Zone experiences a sustained failure, you can recover service in another Availability Zone. If you replicate data and resources across Regions, you can recover service in another Region.
- Deploy your compute resources into multiple Availability Zones.
  - In your VPC, create multiple subnets in different Availability Zones. Configure each to be large enough to accommodate the resources needed to serve the workload, even during an incident. For more detail, see REL02-BP03 Ensure IP subnet allocation accounts for expansion and availability.
  - If you are using Amazon EC2 instances, use EC2 Auto Scaling to manage your instances. Specify the subnets you chose in the previous step when you create your Auto Scaling groups (a minimal sketch follows this list).
  - If you are using AWS Fargate compute for Amazon ECS or Amazon EKS, select the subnets you chose in the first step when you create an ECS service, launch an ECS task, or create a Fargate profile for EKS.
  - If you are using AWS Lambda functions that need to run in your VPC, select the subnets you chose in the first step when you create the Lambda function. For any functions that do not have a VPC configuration, AWS Lambda manages availability for you automatically.
  - Place traffic directors such as load balancers in front of your compute resources. When cross-zone load balancing is enabled, Application Load Balancers and Network Load Balancers detect when targets such as EC2 instances and containers are unreachable due to an Availability Zone impairment and reroute traffic to targets in healthy Availability Zones. If you disable cross-zone load balancing, use Amazon Route 53 Application Recovery Controller (ARC) zonal shift to move traffic away from an impaired Availability Zone. If you are using a third-party load balancer or have implemented your own load balancers, configure them with multiple front ends across different Availability Zones.
- Replicate your workload's data across multiple Availability Zones.
  - If you use an AWS-managed data service such as Amazon RDS, Amazon ElastiCache, or Amazon FSx, study its user guide to understand its data replication and resilience capabilities. Enable cross-AZ replication and failover if necessary.
  - If you use AWS-managed storage services such as Amazon S3, Amazon EFS, and Amazon FSx, avoid using single-AZ or One Zone configurations for data that requires high durability. Use a multi-AZ configuration for these services. Check the respective service's user guide to determine whether multi-AZ replication is enabled by default or whether you must enable it.
  - If you run a self-managed database, queue, or other storage service, arrange for multi-AZ replication according to the application's instructions or best practices. Familiarize yourself with the failover procedures for your application.
- Configure your DNS service to detect AZ impairment and reroute traffic to a healthy Availability Zone. Amazon Route 53, when used in combination with Elastic Load Balancers, can do this automatically. Route 53 can also be configured with failover records that use health checks to respond to queries with only healthy IP addresses. For any DNS records used for failover, specify a short time to live (TTL) value (for example, 60 seconds or less) to help prevent record caching from impeding recovery (Route 53 alias records supply appropriate TTLs for you).
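The EC2 Auto Scaling step above can be expressed as a short boto3 sketch. The subnet IDs, launch template, and target group ARN are placeholders; the key point is that VPCZoneIdentifier lists one subnet per Availability Zone.

```python
import boto3

# Sketch: an Auto Scaling group spread across subnets in three Availability Zones.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                                    # placeholder
    LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0", "Version": "$Latest"},
    MinSize=3,
    MaxSize=9,
    DesiredCapacity=3,
    # One subnet per Availability Zone so instances are distributed across AZs.
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef"],
    HealthCheckType="ELB",                         # replace instances that fail LB health checks
    HealthCheckGracePeriod=300,
)
```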
Additional steps when using multiple AWS Regions
- Replicate all operating system (OS) and application code used by your workload across your selected Regions. If necessary, replicate the Amazon Machine Images (AMIs) used by your EC2 instances with solutions such as EC2 Image Builder. Replicate container images stored in registries using Amazon ECR cross-Region replication. Enable cross-Region replication for any Amazon S3 buckets used for storing application resources.
- Deploy your compute resources and configuration metadata (such as parameters stored in AWS Systems Manager Parameter Store) into multiple Regions. Use the same procedures described in the previous steps, but replicate the configuration for each Region you are using for your workload. Use infrastructure as code solutions such as AWS CloudFormation to uniformly reproduce the configuration across Regions. If you are using a secondary Region in a pilot light configuration for disaster recovery, you can reduce your compute resources to a minimum to save cost, with a corresponding increase in time to recovery.
- Replicate your data from your primary Region into your secondary Regions.
  - Amazon DynamoDB global tables provide global replicas of your data that can be written to from any supported Region. With other AWS-managed data services, such as Amazon RDS, Amazon Aurora, and Amazon ElastiCache, you designate a primary (read/write) Region and replica (read-only) Regions. Consult the respective services' user and developer guides for details on Regional replication.
  - If you are running a self-managed database, arrange for multi-Region replication according to the application's instructions or best practices. Familiarize yourself with the failover procedures for your application.
  - If your workload uses Amazon EventBridge, you may need to forward selected events from your primary Region to your secondary Regions. To do so, specify event buses in your secondary Regions as targets for matched events in your primary Region.
- Consider whether and to what extent you want to use identical encryption keys across Regions. A typical approach that balances security and ease of use is to use Region-scoped keys for Region-local data and authentication, and globally-scoped keys for encrypting data that is replicated among different Regions. AWS Key Management Service (AWS KMS) supports multi-Region keys to securely distribute and protect keys shared across Regions (see the sketch after this list).
- Consider AWS Global Accelerator to improve the availability of your application by directing traffic to Regions that contain healthy endpoints.
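A minimal boto3 sketch of the multi-Region key approach mentioned above: create a multi-Region primary key, then replicate it into a secondary Region. The description and Region names are placeholders.

```python
import boto3

# Sketch: create a multi-Region KMS key and replicate it to a second Region.
kms = boto3.client("kms", region_name="us-east-1")       # primary key Region

key = kms.create_key(
    Description="Workload data key shared across Regions",  # placeholder description
    MultiRegion=True,
)

# ReplicateKey must be called in the primary key's Region.
kms.replicate_key(
    KeyId=key["KeyMetadata"]["KeyId"],
    ReplicaRegion="us-west-2",                           # placeholder replica Region
)
```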
Resources
Related best practices:
Related documents:
- Amazon EC2 Auto Scaling: Example: Distribute instances across Availability Zones
- How Amazon ECS places tasks on container instances (includes Fargate)
- Amazon ElastiCache for Redis OSS: Replication across AWS Regions using global datastores
- Amazon Route 53 Application Recovery Controller Developer Guide
- Sending and receiving Amazon EventBridge events between AWS Regions
- Creating a Multi-Region Application with AWS Services blog series
- Disaster Recovery (DR) Architecture on AWS, Part I: Strategies for Recovery in the Cloud
- Disaster Recovery (DR) Architecture on AWS, Part III: Pilot Light and Warm Standby
Related videos: