Which actions should be taken to meet these requirements?
Store the data in Amazon DocumentDB. Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized Amazon API Gateway and AWS Lambda. Assign the company’s domain as an alternate domain for the distribution, and configure Amazon Route 53 with an alias to the CloudFront distribution.
Store the data in replicated Amazon S3 buckets in two Regions. Create an Amazon CloudFront distribution in each Region, with custom origins built on Amazon API Gateway and AWS Lambda launched in each Region. Assign the company’s domain as an alternate domain for both distributions, and configure Amazon Route 53 with a failover routing policy between them.
Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record in the company’s domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two ALBs.
Store the data in an Amazon Aurora global database. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. Configure the instances to download the web service code in the user data. In Amazon Route 53, configure an alias record for the company’s domain and a multivalue answer routing policy.
Explanations:
While Amazon DocumentDB and CloudFront can help with latency and caching, DocumentDB is not the best choice for cross-Region fault tolerance because it is not designed for global distribution. Layering a custom CloudFront distribution on an edge-optimized API Gateway endpoint also adds little, since an edge-optimized endpoint already sits behind CloudFront. Most importantly, all backend compute and data remain in a single Region, so the design offers no Regional failover and may not absorb spikes in load efficiently.
Storing the data in replicated S3 buckets in two Regions provides durability, but S3 is an object store, not a database, and Cross-Region Replication is asynchronous, so it cannot provide consistent real-time read/write access. CloudFront also does not allow the same alternate domain name to be attached to two distributions. Finally, failover routing is active-passive: all traffic is served from the primary Region until its health check fails, so users far from that Region see higher latency and the standby Region does nothing to absorb traffic spikes.
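To make the active-passive behavior concrete, here is a minimal boto3 (Python) sketch of a Route 53 failover record pair; the domain, hosted-zone ID, distribution DNS names, and health-check ID are hypothetical placeholders, not values from the question.

```python
import boto3

r53 = boto3.client("route53")

# Failover routing is active-passive: Route 53 answers every query with the
# PRIMARY record while its health check passes, regardless of where the
# client is -- which is why the cross-Region latency benefit is lost.
records = [
    ("PRIMARY",   "d111111abcdef8.cloudfront.net", "hc-primary-id"),  # hypothetical health check
    ("SECONDARY", "d222222abcdef8.cloudfront.net", None),
]
for role, dist_dns, health_check in records:
    rrset = {
        "Name": "api.example.com",             # hypothetical domain
        "Type": "A",
        "SetIdentifier": role,
        "Failover": role,
        "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",  # fixed zone ID for CloudFront alias targets
            "DNSName": dist_dns,
            "EvaluateTargetHealth": False,     # not supported for CloudFront targets
        },
    }
    if health_check:
        rrset["HealthCheckId"] = health_check  # PRIMARY needs an explicit health check
    r53.change_resource_record_sets(
        HostedZoneId="ZHOSTEDZONEEXAMPLE",     # hypothetical hosted-zone ID
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": rrset}]},
    )
```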
Storing the data in a DynamoDB global table provides fault tolerance and low-latency reads and writes in both Regions. On-demand capacity mode lets the table scale automatically to absorb traffic spikes without capacity planning. Running the web service as ECS Fargate tasks in an Auto Scaling service lets compute adjust dynamically to varying load, and Route 53 latency-based routing with health checks sends each user to the healthy Region with the lowest latency, making this the option that best meets the requirements.
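For concreteness, a minimal boto3 (Python) sketch of the data and DNS pieces of this option might look like the following; the table name, domain, hosted-zone IDs, and ALB DNS names are hypothetical placeholders, not values from the question.

```python
import boto3

# --- DynamoDB global table with on-demand capacity ---
ddb = boto3.client("dynamodb", region_name="us-east-1")

# PAY_PER_REQUEST (on-demand) capacity scales automatically with traffic spikes.
ddb.create_table(
    TableName="GameScores",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={  # streams are required for global-table replication
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
ddb.get_waiter("table_exists").wait(TableName="GameScores")

# Adding a replica in a second Region turns the table into a global table.
ddb.update_table(
    TableName="GameScores",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)

# --- Route 53 latency-based alias records pointing at the two ALBs ---
r53 = boto3.client("route53")
for region, alb_dns, alb_zone in [
    ("us-east-1", "alb-use1.example.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "alb-euw1.example.elb.amazonaws.com", "Z32O12XQLNTSW2"),
]:
    r53.change_resource_record_sets(
        HostedZoneId="ZHOSTEDZONEEXAMPLE",  # hypothetical hosted-zone ID
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",   # hypothetical domain
                "Type": "A",
                "SetIdentifier": region,
                "Region": region,            # latency-based routing key
                "AliasTarget": {
                    "HostedZoneId": alb_zone,  # the ALB's canonical hosted-zone ID
                    "DNSName": alb_dns,
                    # Fail away from a Region whose ALB targets are unhealthy.
                    "EvaluateTargetHealth": True,
                },
            },
        }]},
    )
```

Here EvaluateTargetHealth on the alias records stands in for the health checking the option describes; explicit Route 53 health checks attached to each record would work equally well.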
While an Amazon Aurora global database provides multi-Region fault tolerance and good read performance, only the primary Region accepts writes, so a Regional failure requires promoting a secondary cluster. In addition, EC2 instances in Auto Scaling groups add operational complexity and typically take longer to launch than Fargate tasks, and downloading the service code in user data adds further delay on every scale-out, making this design slower to react to spikes in load.
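As a rough illustration of the user-data pattern being criticized (the AMI ID, bucket, and names below are hypothetical), the launch template behind such an Auto Scaling group might embed a boot-time download like this:

```python
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical user-data script: every newly launched instance must download
# and start the service code at boot, so each scale-out event pays this
# startup cost before the instance can take traffic.
user_data = """#!/bin/bash
aws s3 cp s3://example-artifacts/webservice.zip /opt/webservice.zip
unzip -o /opt/webservice.zip -d /opt/webservice
systemctl start webservice
"""

ec2.create_launch_template(
    LaunchTemplateName="webservice-lt",      # hypothetical name
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # hypothetical AMI
        "InstanceType": "t3.medium",
        # Launch templates require user data to be base64-encoded.
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
```

Baking the code into the AMI or a container image avoids this per-instance download, which is part of why the Fargate-based option scales out faster.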