Which of the following cannot be done using AWS Data Pipeline?
A. Create complex data processing workloads that are fault tolerant, repeatable, and highly available.
B. Regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service.
C. Generate reports over data that has been stored.
D. Move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.
Explanations:
A. AWS Data Pipeline is designed to create complex, fault-tolerant, repeatable, and highly available data processing workloads, so this can be done.
B. AWS Data Pipeline lets you regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to other AWS services, so this can be done.
C. Correct answer. AWS Data Pipeline does not generate reports; it orchestrates data movement, transformation, and processing, so reporting requires a separate analytics or BI service that consumes the processed output.
D. AWS Data Pipeline can move data between AWS compute and storage services and on-premises data sources at specified intervals, so this can be done.
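To make option D concrete, here is a minimal boto3 sketch that defines and activates a pipeline copying data between two S3 locations on a daily schedule. The pipeline name, bucket URIs, and IAM role names are hypothetical placeholders; the object layout follows Data Pipeline's definition format (id, name, and key/value fields), but treat this as an illustration rather than a production template.

```python
# Minimal sketch, assuming boto3 is configured with credentials allowed to
# create Data Pipeline resources. Bucket names, roles, and the pipeline name
# are hypothetical placeholders.
import boto3

client = boto3.client("datapipeline")

# Register an empty pipeline shell; uniqueId acts as an idempotency token.
pipeline = client.create_pipeline(
    name="daily-s3-copy",           # hypothetical pipeline name
    uniqueId="daily-s3-copy-0001",
)
pipeline_id = pipeline["pipelineId"]

# Define a daily Schedule plus a CopyActivity that moves data from a source
# S3 location to a destination S3 location, running on a transient EC2 resource.
objects = [
    {
        "id": "Default",
        "name": "Default",
        "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "schedule", "refValue": "DailySchedule"},
            {"key": "pipelineLogUri", "stringValue": "s3://example-logs/"},  # hypothetical
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
        ],
    },
    {
        "id": "DailySchedule",
        "name": "DailySchedule",
        "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 days"},
            {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
        ],
    },
    {
        "id": "SourceData",
        "name": "SourceData",
        "fields": [
            {"key": "type", "stringValue": "S3DataNode"},
            {"key": "directoryPath", "stringValue": "s3://example-source/input/"},  # hypothetical
        ],
    },
    {
        "id": "DestData",
        "name": "DestData",
        "fields": [
            {"key": "type", "stringValue": "S3DataNode"},
            {"key": "directoryPath", "stringValue": "s3://example-dest/output/"},  # hypothetical
        ],
    },
    {
        "id": "CopyData",
        "name": "CopyData",
        "fields": [
            {"key": "type", "stringValue": "CopyActivity"},
            {"key": "input", "refValue": "SourceData"},
            {"key": "output", "refValue": "DestData"},
            {"key": "runsOn", "refValue": "CopyResource"},
        ],
    },
    {
        "id": "CopyResource",
        "name": "CopyResource",
        "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "terminateAfter", "stringValue": "30 minutes"},
        ],
    },
]

# Upload the definition and start the scheduled runs.
client.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
client.activate_pipeline(pipelineId=pipeline_id)
```

Note that the pipeline itself only moves and transforms data on the schedule you define; producing a report from the copied output (option C) would still fall to a downstream tool.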