Which solution meets these requirements?
Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3 bucket.
Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time recovery for the table.
Explanations:
Using Amazon EMR with Apache Hive adds unnecessary operational complexity and requires writing and maintaining Hive jobs, which contradicts the requirement for minimal coding. Because the Hive job reads from the live table, the export also consumes RCUs and can affect application availability while it runs.
Exporting data directly from DynamoDB to Amazon S3 is a fully managed feature that requires no custom code. The export relies on continuous backups, so point-in-time recovery (PITR) must be turned on for the table; PITR also allows the table to be restored to any second within the retention period. Because the export reads from the continuous backup rather than the live table, it consumes no RCUs and has no impact on application availability, meeting all of the requirements.
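As a rough illustration of how little code this option needs, the whole workflow comes down to two API calls. This is a minimal sketch only; the table name, account ID, region, and bucket name are hypothetical placeholders:

import boto3

dynamodb = boto3.client("dynamodb")

# Turn on continuous backups (PITR) for the table; the export feature requires this.
dynamodb.update_continuous_backups(
    TableName="Orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Export the table from its continuous backups to S3.
# The export reads the backup, not the live table, so it consumes no RCUs.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    S3Bucket="my-backup-bucket",
    ExportFormat="DYNAMODB_JSON",
)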
While DynamoDB Streams with an AWS Lambda function can export changes to S3 in near real time, it requires writing, deploying, and maintaining custom code, which conflicts with the minimal-coding requirement. The stream also captures only item-level changes made after it is enabled, so it does not back up existing data and does not provide the same continuous backup guarantee as the managed export option.
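For context on the extra code this option implies, a stream-consuming Lambda handler might look roughly like the sketch below. The bucket name and object key scheme are hypothetical assumptions, not part of the question:

import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Each invocation receives a batch of stream records (INSERT/MODIFY/REMOVE events).
    for record in event["Records"]:
        change = record["dynamodb"]
        s3.put_object(
            Bucket="my-backup-bucket",              # hypothetical bucket
            Key=f"stream-export/{record['eventID']}.json",  # hypothetical key scheme
            Body=json.dumps(change, default=str),
        )

This code, its IAM permissions, error handling, and the event source mapping all have to be built and maintained, which is exactly the overhead the managed export avoids.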
Although a scheduled AWS Lambda function can automate exports, it also involves custom code and setup, and it produces periodic snapshots rather than continuous backups. Because the function must scan the live table on every run, it consumes RCUs and can affect availability, unlike exporting directly to S3 from continuous backups.
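A minimal sketch of such a scheduled export is shown below, again with hypothetical table, bucket, and key names. Note that the Scan operation reads every item from the live table, which is where the RCU consumption comes from:

import json
import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

def handler(event, context):
    items = []
    paginator = dynamodb.get_paginator("scan")
    # A full table scan on every scheduled run consumes RCUs against the live table.
    for page in paginator.paginate(TableName="Orders"):
        items.extend(page["Items"])
    s3.put_object(
        Bucket="my-backup-bucket",
        Key="scheduled-export/snapshot.json",
        Body=json.dumps(items),
    )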