Which solutions will meet these requirements?
(Choose two.)
Launch multiple medium-sized instances in a distributed SageMaker Processing job. Use the prebuilt Docker images for Apache Spark to query and plot the relevant data and to export the relevant data from Amazon Redshift to Amazon S3.
Launch multiple medium-sized notebook instances with a PySpark kernel in distributed mode. Download the data from Amazon Redshift to the notebook cluster. Query and plot the relevant data. Export the relevant data from the notebook cluster to Amazon S3.
Use AWS Secrets Manager to store the Amazon Redshift credentials. From a SageMaker Studio notebook, use the stored credentials to connect to Amazon Redshift with a Python adapter. Use the Python client to query the relevant data and to export the relevant data from Amazon Redshift to Amazon S3.
Use AWS Secrets Manager to store the Amazon Redshift credentials. Launch a SageMaker extra-large notebook instance with block storage that is slightly larger than 10 TB. Use the stored credentials to connect to Amazon Redshift with a Python adapter. Download, query, and plot the relevant data. Export the relevant data from the local notebook drive to Amazon S3.
Use SageMaker Data Wrangler to query and plot the relevant data and to export the relevant data from Amazon Redshift to Amazon S3.
Explanations:
Launching multiple medium-sized instances in a distributed SageMaker Processing job with Apache Spark is not cost-effective for this use case. It stands up and pays for a Spark cluster to move and process 10 TB of data that Amazon Redshift can already filter and export itself, so the extra instances add cost and operational overhead without any benefit.
Launching multiple medium-sized notebook instances with a PySpark kernel "in distributed mode" is not a viable or efficient option. SageMaker notebook instances do not natively form a distributed Spark cluster, and downloading all 10 TB from Amazon Redshift to store it locally on the notebooks is neither scalable nor cost-effective.
Using AWS Secrets Manager to securely store the Redshift credentials and connecting from a SageMaker Studio notebook through a Python adapter is a correct approach. The SQL query runs inside Amazon Redshift, so only the relevant subset of the 10 TB is returned for plotting, and the same connection can export that subset directly from Redshift to Amazon S3 (for example with an UNLOAD statement) without staging the data on the notebook. This keeps the solution secure and cost-effective.
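As a minimal illustrative sketch (not part of the original answer choices), this approach might look like the following in a Studio notebook. The secret name, table, query, S3 path, and IAM role are hypothetical placeholders, and redshift_connector is used here as one example of a Python adapter for Amazon Redshift:

import json

import boto3
import redshift_connector  # one example of a Python adapter for Amazon Redshift

# Retrieve the Redshift credentials stored in AWS Secrets Manager
# (secret name "redshift/analytics" is a hypothetical placeholder).
secrets_client = boto3.client("secretsmanager")
secret = json.loads(
    secrets_client.get_secret_value(SecretId="redshift/analytics")["SecretString"]
)

# Connect to the cluster with the stored credentials.
conn = redshift_connector.connect(
    host=secret["host"],
    database=secret["dbname"],
    user=secret["username"],
    password=secret["password"],
    port=int(secret.get("port", 5439)),
)
cursor = conn.cursor()

# Query only the relevant subset; the filtering runs inside Redshift,
# so the full 10 TB never leaves the warehouse.
cursor.execute(
    "SELECT order_date, SUM(amount) AS revenue "
    "FROM sales WHERE order_date >= '2023-01-01' "
    "GROUP BY order_date ORDER BY order_date"
)
df = cursor.fetch_dataframe()
df.plot(x="order_date", y="revenue")  # quick plot inside the notebook

# Export the relevant data directly from Redshift to Amazon S3 with UNLOAD
# (bucket name and IAM role ARN are hypothetical placeholders).
cursor.execute(
    "UNLOAD ('SELECT * FROM sales WHERE order_date >= ''2023-01-01''') "
    "TO 's3://example-bucket/exports/sales_' "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole' "
    "FORMAT AS PARQUET"
)
conn.commit()

Because UNLOAD writes the filtered result set from Redshift to S3 in parallel, the notebook never has to hold the exported data locally.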
Using an extra-large SageMaker notebook instance with more than 10 TB of block storage is not scalable or cost-effective. Downloading the entire dataset takes a long time, duplicates the storage cost on the attached volume, and leaves a single instance to process a volume of data it is not designed to handle; notebook instances are not intended for large-scale data processing.
SageMaker Data Wrangler is also a correct approach. It connects directly to Amazon Redshift, so you can query the relevant data, plot it with the built-in visualizations, and export the result to Amazon S3 without provisioning or managing any infrastructure, making it a simple and cost-effective way to work with a large dataset.