Which solution will meet these requirements?
Create a scaling policy that will scale the application based on the ActiveConnectionCount Amazon CloudWatch metric that is generated from the ELB.
Create a scaling policy that will scale the application based on the mem_used Amazon CloudWatch metric that is generated from the EC2 instances.
Create a scheduled scaling policy to increase the number of EC2 instances in the Auto Scaling group to support additional connections.
Create and deploy a script on the ELB to expose the number of connected users as a custom Amazon CloudWatch metric. Create a scaling policy that uses the metric.
Explanations:
The ActiveConnectionCount metric from the Elastic Load Balancer (ELB) reflects the number of active connections to the application, which directly correlates to user traffic. A scaling policy based on this metric would allow the Auto Scaling group to scale the application based on the number of users connected.
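As a minimal sketch of this approach: because ActiveConnectionCount is not one of the predefined target-tracking metrics, the policy would reference it through a customized metric specification. The Auto Scaling group name, load balancer dimension value, and target value below are placeholders, not values from the question.

```python
def connection_tracking_policy(asg_name, lb_dimension, target=1000.0):
    """Build a target-tracking policy request that scales on the load
    balancer's ActiveConnectionCount metric via a customized metric
    specification (the metric is not in the predefined set)."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": "scale-on-active-connections",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "CustomizedMetricSpecification": {
                "Namespace": "AWS/ApplicationELB",
                "MetricName": "ActiveConnectionCount",
                # Placeholder dimension in the form "app/<name>/<id>"
                "Dimensions": [{"Name": "LoadBalancer", "Value": lb_dimension}],
                "Statistic": "Sum",
            },
            "TargetValue": target,  # desired active connections for the group
        },
    }


def apply_policy(policy):
    # Requires AWS credentials; shown for completeness, not executed here.
    import boto3
    boto3.client("autoscaling").put_scaling_policy(**policy)
```

With a configuration like this, the Auto Scaling group adds or removes instances as the connection count moves away from the target, which is exactly the user-driven behavior the question asks for.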
The mem_used metric reflects memory usage on the EC2 instances, which has no direct relationship to the number of users connecting to the application. Scaling based on ActiveConnectionCount is more relevant for handling user traffic.
A scheduled scaling policy adjusts the number of EC2 instances at specific times or intervals, but it does not dynamically scale based on the actual number of users or traffic. This method doesn’t meet the requirement of scaling based on user connections.
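To make the contrast concrete, a scheduled action looks roughly like the sketch below (group name, time, and capacity are placeholders): the capacity change is pinned to a clock time, so it fires whether or not users actually connect.

```python
from datetime import datetime, timezone


def evening_scale_out(asg_name, desired=10):
    """Build a scheduled-action request that raises capacity at a fixed
    time. The trigger is the clock, not user traffic, which is why this
    option cannot track the actual number of connected users."""
    return {
        "AutoScalingGroupName": asg_name,
        "ScheduledActionName": "evening-scale-out",
        "StartTime": datetime(2025, 1, 1, 18, 0, tzinfo=timezone.utc),
        "DesiredCapacity": desired,
    }


def apply_scheduled_action(action):
    # Requires AWS credentials; shown for completeness, not executed here.
    import boto3
    boto3.client("autoscaling").put_scheduled_update_group_action(**action)
```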
While creating a custom metric for connected users is a possible solution, it adds unnecessary complexity and overhead. The ActiveConnectionCount metric from the ELB already provides the necessary information to scale based on user connections without requiring additional custom metrics or scripts.
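For completeness, a sketch of what such a script would have to publish is below; the MyApp namespace and ConnectedUsers metric name are hypothetical. Even this minimal version implies a scheduler, error handling, and IAM permissions, which is the overhead the built-in metric avoids.

```python
def connected_users_datapoint(count):
    """Build one datapoint for a hypothetical ConnectedUsers custom
    metric. A real deployment would run this on a timer and handle
    failures, all of which the built-in ELB metric makes unnecessary."""
    return {
        "Namespace": "MyApp",  # hypothetical custom namespace
        "MetricData": [
            {"MetricName": "ConnectedUsers", "Value": float(count), "Unit": "Count"}
        ],
    }


def publish(datapoint):
    # Requires AWS credentials; shown for completeness, not executed here.
    import boto3
    boto3.client("cloudwatch").put_metric_data(**datapoint)
```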