Which combination of steps will meet these requirements with the MOST operational efficiency?
(Choose three.)
A. Take a snapshot of the required tables from the other Redshift clusters. Restore the snapshot into the existing Redshift cluster.
B. Create external tables in the existing Redshift database to connect to the AWS Glue Data Catalog tables.
C. Unload the RDS tables and the tables from the other Redshift clusters into Amazon S3. Run COPY commands to load the tables into the existing Redshift cluster.
D. Use federated queries to access data in Amazon RDS.
E. Use data sharing to access data from the other Redshift clusters.
F. Use AWS Glue jobs to transfer the AWS Glue Data Catalog tables into Amazon S3. Create external tables in the existing Redshift database to access this data.
Explanations:
Creating external tables in the existing Redshift database (by mapping an external schema to the AWS Glue Data Catalog) lets Redshift Spectrum query the cataloged data in place, with no manual data movement.
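A minimal sketch of this setup; the Glue database name, IAM role ARN, and table name below are placeholders, not values from the question:

```sql
-- Map the Glue Data Catalog database into Redshift as an external schema.
-- Every table registered in glue_db then appears as an external table.
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG
DATABASE 'glue_db'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftSpectrumRole';

-- Query the cataloged data in place, without loading it into the cluster
-- (events is a hypothetical cataloged table):
SELECT count(*) FROM spectrum_schema.events;
```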
Federated queries let Redshift query Amazon RDS databases (for example, RDS for PostgreSQL) directly and live, with no intermediate steps such as unloading and reloading data.
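A federated query is set up the same way, as an external schema pointing at the RDS endpoint; credentials come from AWS Secrets Manager. The hostname, database, role, and secret ARN below are placeholders:

```sql
-- Expose the RDS for PostgreSQL database as an external schema.
CREATE EXTERNAL SCHEMA rds_pg
FROM POSTGRES
DATABASE 'appdb' SCHEMA 'public'
URI 'appdb.abc123xyz.us-east-1.rds.amazonaws.com'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:111122223333:secret:appdb-creds';

-- Queries against rds_pg.* run against the live RDS instance
-- (orders is a hypothetical RDS table):
SELECT count(*) FROM rds_pg.orders;
```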
Redshift data sharing allows one cluster to access data in other Redshift clusters, making it efficient to share and query data without requiring data duplication or export/import operations.
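Data sharing is configured on both sides; a sketch, with share, table, and namespace GUIDs as placeholders:

```sql
-- On the producer cluster: create a datashare, add objects to it,
-- and grant the consumer cluster's namespace access.
CREATE DATASHARE sales_share;
ALTER DATASHARE sales_share ADD SCHEMA public;
ALTER DATASHARE sales_share ADD TABLE public.sales;
GRANT USAGE ON DATASHARE sales_share TO NAMESPACE '<consumer-namespace-guid>';

-- On the consumer cluster: attach the share as a local database
-- and query it without copying any data.
CREATE DATABASE sales_from_share
FROM DATASHARE sales_share OF NAMESPACE '<producer-namespace-guid>';
SELECT count(*) FROM sales_from_share.public.sales;
```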
Taking snapshots and restoring into the existing cluster is unnecessary and inefficient compared to data sharing, which allows real-time access to data from other Redshift clusters without data movement.
Unloading and copying data from RDS and other Redshift clusters to S3 introduces unnecessary data movement steps; federated queries and data sharing provide more efficient access methods.
Using Glue jobs to transfer data to S3 and creating external tables is complex and unnecessary; directly creating external tables in Redshift for Glue Data Catalog tables is more operationally efficient.