Which solution meets these requirements?
Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.
Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.
Explanations:
The custom script solution requires provisioning, patching, and monitoring an EC2 instance solely to run a weekly cron job. Managing that instance adds operational overhead compared with serverless options such as AWS Lambda.
This is the most operationally efficient option. Aurora MySQL natively supports the SELECT INTO OUTFILE S3 statement, which writes query results directly to Amazon S3, so a single Lambda function can export the archival data and then delete it from the DB cluster in one run. Scheduling the function with an Amazon EventBridge (Amazon CloudWatch Events) rule automates the weekly execution with no infrastructure to manage.
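A minimal sketch of what that weekly Lambda function could look like. The table name, S3 bucket, column name, and retention window are all hypothetical, and the database call is shown only as a comment because it assumes a MySQL client library (such as pymysql) bundled with the function and an IAM role that lets the Aurora cluster write to the bucket:

```python
# Hypothetical sketch of option 2: one Lambda function that exports
# archival rows to S3 via SELECT INTO OUTFILE S3, then deletes them.
# Table, bucket, column, and retention values are illustrative only.
import datetime

ARCHIVE_TABLE = "orders"                                    # hypothetical table
S3_PREFIX = "s3-us-east-1://example-archive-bucket/orders"  # hypothetical bucket
RETENTION_DAYS = 90                                         # archive rows older than this


def build_statements(run_date: datetime.date) -> tuple[str, str]:
    """Build the export and delete statements for one weekly run."""
    cutoff = run_date - datetime.timedelta(days=RETENTION_DAYS)
    export_sql = (
        f"SELECT * FROM {ARCHIVE_TABLE} "
        f"WHERE created_at < '{cutoff.isoformat()}' "
        f"INTO OUTFILE S3 '{S3_PREFIX}/{run_date.isoformat()}' "
        "FORMAT CSV HEADER"
    )
    delete_sql = (
        f"DELETE FROM {ARCHIVE_TABLE} "
        f"WHERE created_at < '{cutoff.isoformat()}'"
    )
    return export_sql, delete_sql


def handler(event, context):
    """Entry point, invoked weekly by an EventBridge schedule rule,
    e.g. rate(7 days) or cron(0 3 ? * SUN *)."""
    export_sql, delete_sql = build_statements(datetime.date.today())
    # with pymysql.connect(...) as conn, conn.cursor() as cur:
    #     cur.execute(export_sql)   # Aurora writes the result set to S3
    #     cur.execute(delete_sql)   # then purge the archived rows
    #     conn.commit()
    return {"export": export_sql, "delete": delete_sql}
```

Keeping the export and delete in the same function (and the same transaction) ensures rows are only removed after they have been written to S3, which is harder to guarantee with two independently scheduled functions.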
While Lambda functions can be used here, mysqldump is not a good fit. It is a client utility that would have to be packaged with the function, and dumping data through it is slower and more resource-intensive than having Aurora write query results directly to S3 with SELECT INTO OUTFILE S3. Splitting the export and delete into two separately scheduled functions also risks deleting rows before the export has succeeded.
Although AWS DMS can use Amazon S3 as a target, it is designed for database migration and ongoing replication, not for a weekly archive-and-purge job. Running a continuous DMS task plus an AWS Data Pipeline process for the deletions adds unnecessary cost and operational complexity for this task.