What changes to the current architecture will reduce operational overhead and support the product release?
Create an EC2 Auto Scaling group behind an Application Load Balancer. Create additional read replicas for the DB instance. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
Deploy the application on a Kubernetes cluster created on the EC2 instances behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
Explanations:
While an EC2 Auto Scaling group and additional read replicas would help with scaling and availability, the application itself still runs on EC2 instances that must be patched and maintained. Amazon Kinesis Data Streams is an alternative to Kafka, not a drop-in replacement: existing Kafka workloads would have to be rewritten against a different API. Serving static content directly from S3 is a reasonable practice, but without CloudFront it forgoes edge caching, and overall this option reduces operational overhead less than a fully managed-service approach.
This option's EC2 Auto Scaling group helps handle increased load, and the Multi-AZ deployment with storage auto scaling improves database availability and durability. However, the application still runs on EC2 instances that must be maintained, and, as with the previous option, moving the services to Amazon Kinesis Data Streams is not a direct replacement for an existing Kafka workload. The option therefore does not leverage managed services as fully as it could to reduce operational overhead.
Deploying the application on a self-managed Kubernetes cluster on EC2 improves deployment flexibility but adds the operational overhead of running the Kubernetes control plane and worker nodes. The database choices (Multi-AZ with storage auto scaling) are sound, and an Amazon Managed Streaming for Apache Kafka cluster is a better operational model than managing Kafka on EC2. CloudFront is likewise beneficial for static content delivery. Even so, the overall operational burden remains high because of the self-managed Kubernetes infrastructure.
This option deploys the application on Amazon EKS with AWS Fargate, which removes the need to provision and manage worker nodes for the containers and thereby significantly reduces operational overhead. Additional read replicas offload read traffic and improve database scalability and availability. An Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster lets the existing Kafka-based services keep working while eliminating the management of self-hosted brokers. Storing static content in S3 behind a CloudFront distribution adds edge caching for efficient content delivery. Overall, this option best addresses scalability, availability, and operational management.
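As a rough illustration of how the pieces of the correct option might be provisioned, the sketch below uses eksctl and the AWS CLI. It is a minimal, hedged sketch, not part of the question: every name, region, version, and sizing value is a placeholder assumption, and broker-nodes.json is a hypothetical file holding the MSK broker subnet/instance configuration.

```shell
# Create an EKS cluster whose pods run on AWS Fargate,
# so there are no EC2 worker nodes to patch or scale.
# Cluster name and region are hypothetical placeholders.
eksctl create cluster \
  --name app-cluster \
  --region us-east-1 \
  --fargate

# Create an Amazon MSK (managed Kafka) cluster so the existing
# Kafka-based services keep working without self-hosted brokers.
# Version and broker count are illustrative; broker-nodes.json
# (subnets, instance type, storage) is an assumed local file.
aws kafka create-cluster \
  --cluster-name app-events \
  --kafka-version 3.6.0 \
  --number-of-broker-nodes 3 \
  --broker-node-group-info file://broker-nodes.json

# Add a read replica to offload read traffic from the DB instance.
# Identifiers are placeholders for the existing database.
aws rds create-db-instance-read-replica \
  --db-instance-identifier app-db-replica-1 \
  --source-db-instance-identifier app-db

# Put a CloudFront distribution in front of the static-content bucket
# for edge caching. The bucket domain name is a placeholder.
aws cloudfront create-distribution \
  --origin-domain-name app-static-content.s3.amazonaws.com
```

In practice each of these commands would need account-specific networking and IAM configuration; the point is that every component here is a managed service, so none of them requires ongoing server administration.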