Which solution will meet these requirements with the LEAST operational overhead?
Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on the CPU utilization metric. Set the target value for the metric to 60%.
Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.
Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.
Create an Amazon EventBridge (Amazon CloudWatch Events) event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group reaches 60%. Configure the Lambda function to increase the Auto Scaling group’s desired capacity and maximum capacity by 20%.
Explanations:
A dynamic scaling policy scales based on metrics such as CPU utilization, but it reacts to real-time changes rather than acting on a schedule. Capacity would be added only after utilization rises, not 30 minutes before the batch jobs run.
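For reference, a target tracking (dynamic) policy like the one described in the first option can be created with an SDK; the following is a minimal boto3 sketch, where "batch-workers" is a hypothetical Auto Scaling group name.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking policy: keep the group's average CPU utilization near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-workers",   # hypothetical group name
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```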
A scheduled scaling policy can set the desired capacity at specific times, but the capacity values must be chosen and maintained manually rather than derived from forecasted demand or actual CPU utilization, which adds operational overhead.
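A scheduled action of this kind could look like the sketch below; the group name, capacities, and the Sunday 17:30 UTC recurrence (30 minutes before an assumed 18:00 batch window) are illustrative assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Weekly scheduled action: explicitly set capacity before the batch window.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-workers",   # hypothetical group name
    ScheduledActionName="pre-batch-scale-out",
    Recurrence="30 17 * * 0",               # cron: Sundays at 17:30 UTC
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)
```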
A predictive scaling policy uses historical data to forecast demand and adjusts capacity in advance. It can automatically launch instances 30 minutes before the jobs run, meeting the requirement with the least operational overhead.
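A predictive scaling policy matching this option might be configured as in the sketch below, again assuming a hypothetical "batch-workers" group; the 1800-second buffer corresponds to pre-launching instances 30 minutes ahead of the forecast.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling: forecast from historical CPU data, target 60% utilization,
# and launch instances 30 minutes (1800 s) before forecasted demand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-workers",   # hypothetical group name
    PolicyName="predictive-cpu-60",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 60.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
        "SchedulingBufferTime": 1800,  # pre-launch buffer in seconds
    },
)
```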
An EventBridge rule based on CPU utilization would react to real-time data rather than scheduling scaling in advance, and it requires writing and maintaining a custom Lambda function. This adds operational overhead and does not guarantee that capacity is ready 30 minutes before the jobs run.
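To illustrate the extra code this option would require, a minimal sketch of such a Lambda function follows, assuming the function is invoked when CPU utilization crosses the threshold and that "batch-workers" is the hypothetical group name.

```python
import math

import boto3

autoscaling = boto3.client("autoscaling")

ASG_NAME = "batch-workers"  # hypothetical Auto Scaling group name


def lambda_handler(event, context):
    """Increase the group's desired and maximum capacity by 20%."""
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]

    new_desired = math.ceil(group["DesiredCapacity"] * 1.2)
    new_max = math.ceil(group["MaxSize"] * 1.2)

    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=ASG_NAME,
        DesiredCapacity=new_desired,
        MaxSize=new_max,
    )
    return {"desired": new_desired, "max": new_max}
```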