What changes to the current architecture will reduce operational overhead and support the product release?
Create an EC2 Auto Scaling group behind an Application Load Balancer. Create additional read replicas for the DB instance. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
Deploy the application on a Kubernetes cluster created on the EC2 instances behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
Explanations:
The first option creates an EC2 Auto Scaling group and read replicas, but it does not reduce operational overhead effectively: the application still runs on EC2 instances that the team must patch and manage, and each read replica is another database instance to operate. Amazon Kinesis Data Streams is a viable managed streaming service, but it exposes its own APIs rather than the Kafka protocol, so it is not a drop-in replacement the way Amazon MSK would be. Serving static content directly from Amazon S3 is beneficial, but without CloudFront it forgoes edge caching, and the option lacks a managed compute layer such as Amazon EKS with Fargate for better resource management and scaling.
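As a rough illustration of the integration point, the sketch below (assuming boto3 is configured with credentials and a region, and using a hypothetical stream name "orders") shows that Kinesis has its own PutRecord-style API, so services written against Kafka producer clients would need code changes to adopt it.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# On-demand capacity mode avoids shard management (an operational win),
# but the producer code is still Kinesis-specific, not Kafka-compatible.
kinesis.create_stream(
    StreamName="orders",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)
kinesis.get_waiter("stream_exists").wait(StreamName="orders")

# Publish a sample order event; Kafka producers would use a different client
# and protocol entirely.
kinesis.put_record(
    StreamName="orders",
    Data=json.dumps({"order_id": "123", "status": "created"}).encode(),
    PartitionKey="123",
)
```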
The second option also uses an EC2 Auto Scaling group behind an Application Load Balancer, and deploying the DB instance in Multi-AZ mode with storage auto scaling improves availability and removes manual storage management. However, the application still runs on self-managed EC2 instances rather than a managed container platform such as Amazon EKS with Fargate, so the opportunity to reduce operational overhead is missed, and choosing Kinesis Data Streams instead of a Kafka-compatible managed service carries the same integration drawback as the first option.
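For reference, the database portion of this option maps to a single ModifyDBInstance call. This is a minimal sketch, assuming an existing RDS instance with the hypothetical identifier "app-db"; MultiAZ provisions a standby for failover, and MaxAllocatedStorage sets the ceiling for storage auto scaling.

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="app-db",
    MultiAZ=True,              # synchronous standby in a second AZ for failover
    MaxAllocatedStorage=1000,  # storage auto scaling ceiling, in GiB
    ApplyImmediately=True,     # otherwise changes wait for the maintenance window
)
```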
The third option deploys a Kubernetes cluster directly on EC2 instances, which the team must provision, upgrade, and scale itself, so operational overhead increases rather than decreases. Moving to an Amazon Managed Streaming for Apache Kafka cluster and placing static content behind CloudFront are positive changes, and Multi-AZ improves database availability, but self-managing Kubernetes on EC2 is far more complex than a managed option such as Amazon EKS with Fargate, and the option does not describe how the cluster's worker nodes would scale with demand.
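The streaming piece of this option is the one part that is fully managed. A minimal sketch of provisioning it with boto3 follows, assuming hypothetical subnet and security group IDs; the Kafka brokers are AWS-managed, but the Kubernetes control plane and worker nodes on EC2 would still have to be operated by the team.

```python
import boto3

msk = boto3.client("kafka")

msk.create_cluster(
    ClusterName="orders-msk",
    KafkaVersion="3.6.0",
    NumberOfBrokerNodes=3,  # one broker per subnet/AZ listed below
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
        "SecurityGroups": ["sg-0123456789abcdef0"],
    },
)
```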
The fourth option is the best fit. Amazon EKS with AWS Fargate runs the application containers on AWS-managed capacity, so there are no worker nodes to patch or scale, and auto scaling behind an Application Load Balancer absorbs traffic growth. Additional read replicas offload read traffic from the primary DB instance to support the expected increase in orders. An Amazon MSK cluster provides managed, Kafka-compatible streaming for the application services, and serving static content from Amazon S3 through a CloudFront distribution improves performance and scalability at the edge. Overall, this option handles the anticipated load while minimizing operational complexity.
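Two of the pieces in this option can be sketched with boto3 as shown below, assuming an existing EKS cluster "orders-cluster", an existing pod execution role, and an existing RDS instance "app-db" (all hypothetical names). Pods matching the Fargate profile's selector run on AWS-managed capacity, and the read replica takes read traffic off the primary database.

```python
import boto3

eks = boto3.client("eks")
rds = boto3.client("rds")

# Fargate profile: pods in the "orders" namespace are scheduled onto
# AWS-managed capacity instead of self-managed worker nodes.
eks.create_fargate_profile(
    fargateProfileName="orders-profile",
    clusterName="orders-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-role",
    subnets=["subnet-aaa", "subnet-bbb"],
    selectors=[{"namespace": "orders"}],
)

# Read replica: offloads read-heavy order queries from the primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
)
```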