Which combination of steps will meet these requirements with the LEAST operational overhead?
(Choose two.)
Use an AWS Lambda function to resize the EKS cluster.
Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.
Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
Use Amazon API Gateway and connect it to Amazon EKS.
Use AWS App Mesh to observe network activity.
Explanations:
Using an AWS Lambda function to resize the EKS cluster adds operational overhead because it requires writing, deploying, and maintaining custom scaling code. EKS already integrates with built-in scaling mechanisms that achieve the same result without custom development.
The Kubernetes Metrics Server collects CPU and memory usage from the pods and nodes in the cluster, and the Horizontal Pod Autoscaler uses those metrics to add or remove pod replicas as load changes. This lets EKS adjust the number of pods to the current workload automatically, which is exactly what is needed to handle variable workloads with minimal operational overhead.
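For illustration only, here is a minimal sketch of creating an HPA with the official Kubernetes Python client. It assumes the autoscaling/v2 API (client v24+), a Deployment named web in the default namespace, and that the Metrics Server is already installed; all names are hypothetical.

```python
from kubernetes import client, config

# Assumes kubeconfig access to the EKS cluster; names below are hypothetical.
config.load_kube_config()

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        # Scale the Deployment "web" between 2 and 10 replicas.
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        # Target 70% average CPU utilization, as reported by the Metrics Server.
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=70
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```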
The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in the EKS cluster based on the resource requests of the pods: it adds nodes when pods cannot be scheduled and removes nodes that are underutilized. Once configured, it requires no manual intervention, which keeps operational overhead to a minimum.
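As a rough sketch of the surrounding setup: the Cluster Autoscaler runs as a Deployment inside the cluster and scales a managed node group within the min/max bounds defined on that group. The boto3 call below only sets those bounds; the cluster and node group names are hypothetical.

```python
import boto3

# Hypothetical names; adjust region, cluster, and node group to your environment.
eks = boto3.client("eks", region_name="us-east-1")

# Define the bounds the Cluster Autoscaler (running inside the cluster)
# is allowed to scale the managed node group between.
eks.update_nodegroup_config(
    clusterName="demo-cluster",
    nodegroupName="demo-nodes",
    scalingConfig={"minSize": 2, "maxSize": 10, "desiredSize": 2},
)
```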
Amazon API Gateway is a service for creating, deploying, and managing APIs. Although it can front applications running on EKS, it does not scale the container workloads themselves, so it does not address the requirement to scale with the workload.
AWS App Mesh is a service mesh that provides application-level networking. It helps with observing and managing traffic between microservices, but it does not autoscale workloads in EKS, so it does not meet the scaling requirement.