Which set of actions should a solutions architect take to improve response times?
Create separate Auto Scaling groups based on device types. Switch to Network Load Balancer (NLB). Use the User-Agent HTTP header in the NLB to route to a different set of EC2 instances.
Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load different resources based on the User-Agent HTTP header.
Create a separate ALB for each device type. Create one Auto Scaling group behind each ALB. Use Amazon Route 53 to route to different ALBs depending on the User-Agent HTTP header.
Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use the User-Agent HTTP header to load different content.
Explanations:
While creating separate Auto Scaling groups based on device types may improve resource allocation, switching to a Network Load Balancer is not optimal for HTTP/HTTPS traffic, which is better handled by an Application Load Balancer. More importantly, an NLB operates at Layer 4 (TCP/UDP) and cannot inspect HTTP headers, so routing on the User-Agent header at the NLB is not possible.
Moving content to Amazon S3 and using CloudFront as a content delivery network (CDN) significantly reduces load times by caching static content at edge locations closer to users. Lambda@Edge can inspect the User-Agent HTTP header at the edge and serve device-appropriate resources, tailoring the experience for different devices efficiently (see the sketch below).
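To illustrate the idea, here is a minimal Lambda@Edge sketch in Python, attached to a CloudFront viewer-request trigger, that rewrites the request URI based on the User-Agent header so the S3 origin returns a device-specific variant. The `/mobile` and `/desktop` path prefixes and the device-detection tokens are hypothetical assumptions for illustration, not part of the question.

```python
# Hypothetical Lambda@Edge viewer-request function (Python) for illustration.
# It prepends a device-specific prefix to the URI so CloudFront caches and
# fetches a different S3 object for mobile versus desktop viewers.

def lambda_handler(event, context):
    # CloudFront delivers the request under Records[0].cf.request.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # CloudFront lower-cases header names; User-Agent arrives as "user-agent".
    user_agent = ""
    if "user-agent" in headers:
        user_agent = headers["user-agent"][0]["value"].lower()

    # Very rough device check, for illustration only.
    is_mobile = any(token in user_agent for token in ("iphone", "android", "mobile"))
    prefix = "/mobile" if is_mobile else "/desktop"

    # Rewriting the URI at the viewer-request stage also changes the cache key,
    # so mobile and desktop variants are cached separately at the edge.
    request["uri"] = prefix + request["uri"]

    # Returning the (modified) request lets CloudFront continue processing it.
    return request
```

In practice, the same effect could also be achieved with CloudFront's device-detection headers (for example `CloudFront-Is-Mobile-Viewer`) included in the cache key, but the option as written relies on Lambda@Edge reading the User-Agent header directly.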
Creating separate ALBs for each device type complicates the architecture and introduces unnecessary cost and operational overhead. In addition, Amazon Route 53 is a DNS service: it resolves domain names before any HTTP request is sent, so it cannot see or route on the User-Agent header. This approach is neither as efficient nor as scalable as serving static content from S3 through CloudFront.
Although moving content to S3 and serving it through CloudFront is beneficial, this option does not specify using Lambda@Edge; without edge logic, CloudFront and S3 alone cannot vary the returned object based on the User-Agent header, which limits its ability to tailor the experience for different devices.