Which solution would have the MOST scalability and LOWEST latency?
Configure a Network Load Balancer to terminate the TLS traffic and then re-encrypt the traffic to the containers.
Configure an Application Load Balancer to terminate the TLS traffic and then re-encrypt the traffic to the containers.
Configure a Network Load Balancer with a TCP listener to pass through TLS traffic to the containers.
Configure Amazon Route 53 to use multivalue answer routing to send traffic to the containers.
Explanations:
Terminating TLS at a Network Load Balancer and then re-encrypting the traffic before it reaches the containers adds processing work at the load balancer on every connection. While this keeps the traffic encrypted in transit, the decrypt-and-re-encrypt step adds latency and overhead that work against the requirements under volatile traffic patterns.
An Application Load Balancer also terminates TLS and re-encrypts traffic to the containers, which adds the same latency overhead. Although ALBs handle HTTP/HTTPS traffic efficiently, they operate at layer 7 and are not optimized for the low-latency, high-throughput forwarding that a layer 4 TCP listener provides, especially under volatile traffic.
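As a rough sketch of this terminate-and-re-encrypt pattern (placeholder ARNs and IDs, not a definitive implementation), it amounts to an HTTPS listener in front of an HTTPS target group:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group that re-encrypts: the load balancer opens a new HTTPS
# connection to each container target (placeholder VPC ID).
tg = elbv2.create_target_group(
    Name="app-tg-https",
    Protocol="HTTPS",
    Port=443,
    VpcId="vpc-0123456789abcdef0",   # placeholder
    TargetType="ip",
)

# HTTPS listener: TLS is terminated here, so every request is decrypted
# at the ALB and encrypted again toward the targets (placeholder ARNs).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",  # placeholder
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/placeholder"}],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```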
A Network Load Balancer configured with a TCP listener allows for pass-through of TLS traffic directly to the containers. This minimizes latency as the traffic is not decrypted and re-encrypted. Additionally, it is highly scalable and can handle sudden spikes in traffic efficiently.
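A minimal boto3 sketch of the pass-through pattern, assuming hypothetical subnet and VPC IDs: the detail that matters is that both the listener and the target group use TCP, so the NLB never touches the TLS payload.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Network Load Balancer in two subnets (placeholder IDs).
nlb = elbv2.create_load_balancer(
    Name="tls-passthrough-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],  # placeholders
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# TCP target group on 443: the containers terminate TLS themselves,
# so encryption stays end to end (placeholder VPC ID).
tg = elbv2.create_target_group(
    Name="containers-tcp-443",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",  # placeholder
    TargetType="ip",                # ECS/Fargate awsvpc tasks register by IP
)

# TCP listener: no certificate, no decryption -- bytes are forwarded as-is.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```

The absence of a certificate on the listener is the key design point: the NLB forwards encrypted bytes at layer 4, and the containers present the TLS certificate themselves.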
While Amazon Route 53 multivalue answer routing can distribute traffic by returning multiple healthy IP addresses in DNS responses, it is DNS-level distribution rather than load balancing: client-side DNS caching slows the reaction to traffic spikes and failures, and there is no connection-level balancing or scaling in front of the containers. It therefore lacks the scalability and low-latency characteristics of a dedicated load balancer, making it less suitable for the requirements.
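For reference, multivalue answer routing is simply a set of DNS records sharing one name, each with its own set identifier; a sketch with hypothetical IPs and a placeholder hosted zone ID:

```python
import boto3

route53 = boto3.client("route53")

# Two multivalue answer A records for the same name: Route 53 returns up to
# eight healthy values per query, but distribution happens only at DNS time.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "A",
                    "SetIdentifier": f"container-{i}",
                    "MultiValueAnswer": True,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],  # placeholder container IPs
                },
            }
            for i, ip in enumerate(["10.0.1.10", "10.0.2.11"])
        ]
    },
)
```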