What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?
Use AWS Data Pipeline to manage the movement of data & meta-data and the assessments. Use an auto-scaling group of G2 instances in a placement group.
Use Amazon Simple Workflow (SWF) to manage the assessments and the movement of data & meta-data. Use an auto-scaling group of G2 instances in a placement group.
Use Amazon Simple Workflow (SWF) to manage the assessments and the movement of data & meta-data. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
Use AWS Data Pipeline to manage the movement of data & meta-data and the assessments. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
Explanations:
AWS Data Pipeline is suited to scheduled data workflows, but it is not designed to orchestrate assessments or coordinate human and automated tasks. G2 instances are older GPU instances and may not provide the best performance for CUDA workloads, even with the low-latency networking of a placement group.
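To illustrate the scope of Data Pipeline, here is a minimal boto3 sketch of what it does cover: scheduled data movement between S3 locations. The bucket names, pipeline name, and role names are hypothetical, and the field structure follows the Data Pipeline object model; note that nothing in this definition can route a task to a human reviewer.

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")

# Create an empty pipeline shell (names/IDs are hypothetical).
pipeline_id = dp.create_pipeline(
    name="genome-data-mover",
    uniqueId="genome-data-mover-001",
)["pipelineId"]

# A minimal definition: a daily schedule plus an S3-to-S3 copy.
# Data Pipeline only schedules and runs these objects; it has no
# notion of a human assessment step.
dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {"id": "Default", "name": "Default", "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "schedule", "refValue": "DailySchedule"},
            {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
            {"key": "pipelineLogUri", "stringValue": "s3://example-logs/dp/"},
        ]},
        {"id": "DailySchedule", "name": "DailySchedule", "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 day"},
            {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
        ]},
        {"id": "Input", "name": "Input", "fields": [
            {"key": "type", "stringValue": "S3DataNode"},
            {"key": "directoryPath", "stringValue": "s3://example-raw/samples/"},
        ]},
        {"id": "Output", "name": "Output", "fields": [
            {"key": "type", "stringValue": "S3DataNode"},
            {"key": "directoryPath", "stringValue": "s3://example-staged/samples/"},
        ]},
        {"id": "CopyData", "name": "CopyData", "fields": [
            {"key": "type", "stringValue": "CopyActivity"},
            {"key": "input", "refValue": "Input"},
            {"key": "output", "refValue": "Output"},
            {"key": "runsOn", "refValue": "CopyResource"},
        ]},
        {"id": "CopyResource", "name": "CopyResource", "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "instanceType", "stringValue": "m3.medium"},
            {"key": "terminateAfter", "stringValue": "1 hour"},
        ]},
    ],
)
dp.activate_pipeline(pipelineId=pipeline_id)
```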
Amazon Simple Workflow (SWF) can effectively orchestrate both the assessments and the movement of data & meta-data. Because SWF activity tasks can be completed either by software workers or by people, it directly supports a hybrid approach combining human and automated assessments, and the workflow logic can evolve over time. G2 instances in a placement group provide the GPUs needed for CUDA along with low-latency networking, although G3 or newer instances would generally be preferred for better performance.
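A minimal boto3 sketch of this option's two halves, assuming hypothetical domain, workflow, AMI, and group names: registering an SWF domain and workflow type (deciders and activity workers, whether automated or human-driven, would poll SWF separately), then creating a cluster placement group backed by an auto-scaling group of G2 instances.

```python
import boto3

swf = boto3.client("swf", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")
asg = boto3.client("autoscaling", region_name="us-east-1")

# --- Orchestration half: SWF owns the assessment workflow state. ---
try:
    swf.register_domain(
        name="genome-assessments",                    # hypothetical
        workflowExecutionRetentionPeriodInDays="30",  # string, per the API
    )
except swf.exceptions.DomainAlreadyExistsFault:
    pass

swf.register_workflow_type(
    domain="genome-assessments",
    name="SampleAssessment",
    version="1.0",
    defaultTaskList={"name": "assessment-tasks"},
    defaultTaskStartToCloseTimeout="3600",
    defaultExecutionStartToCloseTimeout="86400",
    defaultChildPolicy="TERMINATE",
)

# Kick off one workflow execution; activity tasks can be completed by
# automated workers or by a person, which is what enables the hybrid model.
swf.start_workflow_execution(
    domain="genome-assessments",
    workflowId="sample-42",                           # hypothetical
    workflowType={"name": "SampleAssessment", "version": "1.0"},
    input='{"sample": "s3://example-raw/samples/42"}',
)

# --- Compute half: G2 GPU instances in a cluster placement group. ---
ec2.create_placement_group(GroupName="cuda-cluster", Strategy="cluster")

asg.create_launch_configuration(
    LaunchConfigurationName="cuda-workers-lc",
    ImageId="ami-0123456789abcdef0",                  # hypothetical CUDA-ready AMI
    InstanceType="g2.2xlarge",
)
asg.create_auto_scaling_group(
    AutoScalingGroupName="cuda-workers",
    LaunchConfigurationName="cuda-workers-lc",
    MinSize=1,
    MaxSize=10,
    PlacementGroup="cuda-cluster",
    AvailabilityZones=["us-east-1a"],                 # cluster PGs are single-AZ
)
```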
While SWF is suitable for managing the assessments and data movement, C3 instances are compute-optimized and carry no GPUs, so they cannot run CUDA applications. SR-IOV (enhanced networking) reduces network I/O latency but does nothing to address the GPU requirement in this context.
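To make the SR-IOV point concrete, here is a short sketch of how enhanced networking is queried and enabled as an EC2 instance attribute (the instance ID is hypothetical). It is purely a network feature, orthogonal to GPU capability:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical

# SR-IOV ("enhanced networking") is reported as an instance attribute.
# A value of "simple" means it is enabled; it says nothing about GPUs.
attr = ec2.describe_instance_attribute(
    InstanceId=instance_id, Attribute="sriovNetSupport"
)
print(attr.get("SriovNetSupport", {}).get("Value", "not enabled"))

# On a supported HVM instance type such as C3, it can be switched on
# while the instance is stopped.
ec2.modify_instance_attribute(
    InstanceId=instance_id, SriovNetSupport={"Value": "simple"}
)
```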
AWS Data Pipeline is not as effective as SWF for managing the assessments: it can schedule data movement, but it lacks SWF's task-orchestration features and has no concept of a human step. C3 instances, as in the previous option, do not provide the GPU support needed for CUDA tasks, making this option unsuitable for the requirements.
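One way to verify the GPU point is to inspect the instance-type metadata: GPU-backed types report a GpuInfo block, while compute-optimized C3 types do not. The sketch below uses a G3 type since G2 has been retired; previous-generation types such as C3 are not offered in every region, so the call may fail where they are unavailable.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# GPU-backed types expose GpuInfo; C3 types have no such block.
resp = ec2.describe_instance_types(InstanceTypes=["g3.4xlarge", "c3.8xlarge"])
for it in resp["InstanceTypes"]:
    gpus = it.get("GpuInfo", {}).get("Gpus")
    if gpus:
        print(it["InstanceType"], "->", gpus[0]["Name"], "GPU")
    else:
        print(it["InstanceType"], "-> no GPU (cannot run CUDA)")
```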