Which solution meets these requirements?
Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3 bucket.
Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time recovery for the table.
Explanations:
Using Amazon EMR and Apache Hive requires provisioning a cluster and writing Hive jobs, which conflicts with the requirement for minimal coding. The Hive job also reads the live table, so it consumes read capacity and can affect application availability, and it does not provide continuous backups.
This option exports the table data directly from DynamoDB to S3 using continuous backups, which requires no custom code. Because the export reads from the point-in-time recovery backup data rather than from the live table, it does not consume read capacity units or affect application availability.
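For illustration only, a minimal boto3 sketch of this option might look like the following; the table name, table ARN, and bucket name are hypothetical placeholders, not values taken from the question.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable point-in-time recovery (continuous backups); the export feature
# requires it to be turned on for the table.
dynamodb.update_continuous_backups(
    TableName="Orders",  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Export the table to S3 from the continuous-backup data; this does not
# read from the live table, so it consumes no read capacity units.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders",  # hypothetical ARN
    S3Bucket="orders-backup-bucket",  # hypothetical bucket
    ExportFormat="DYNAMODB_JSON",
)
```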
While DynamoDB Streams with an AWS Lambda consumer is a valid way to copy table changes to S3, it requires writing and maintaining custom Lambda code plus stream and trigger configuration, which conflicts with the minimal-coding requirement, and the additional processing could affect the application during backup.
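To show the extra code this option implies, here is a hypothetical minimal Lambda handler for a DynamoDB Streams trigger; the bucket name and key layout are assumptions, and a production version would also need error handling, batching, and retries.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "orders-backup-bucket"  # hypothetical bucket name


def handler(event, context):
    # Each invocation receives a batch of stream records that must be
    # serialized and written to S3 by custom code.
    for record in event["Records"]:
        key = f"stream-backup/{record['eventID']}.json"
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=json.dumps(record["dynamodb"], default=str),
        )
```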
Although turning on point-in-time recovery is beneficial, a regularly scheduled Lambda export requires writing and maintaining custom code, which conflicts with the minimal-coding requirement. The function would also have to scan the table on each run, consuming read capacity and potentially affecting the application's performance.