Which solution will meet these requirements with the LEAST operational overhead?
Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the data to an Amazon S3 bucket.
Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older data to S3 Glacier Deep Archive for long-term storage. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to read the data from Kinesis Data Streams. Store the records in an Amazon S3 bucket.
Explanations:
Amazon RDS can provide low-latency access, but it does not deliver sub-millisecond read latency for frequently accessed data, and maintaining a periodic custom export script to Amazon S3 adds ongoing operational overhead.
Storing the data in S3 and using Athena is suitable for one-time queries, but S3 cannot serve frequently accessed data with sub-millisecond latency, so this option does not meet the application's access requirement.
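For reference, the S3 Lifecycle rule this option describes can be expressed as a short boto3 sketch. The bucket name, prefix, and 180-day threshold are illustrative placeholders, not values from the question:

```python
# Illustrative S3 Lifecycle rule: transition older objects to Glacier Deep Archive.
# Prefix and day count are assumptions for the sketch.
LIFECYCLE_CONFIG = {
    "Rules": [
        {
            "ID": "archive-old-data",
            "Filter": {"Prefix": "data/"},
            "Status": "Enabled",
            "Transitions": [
                # After 180 days, move objects to the cheapest archival tier
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }
    ]
}

def apply_lifecycle(bucket: str) -> None:
    """Attach the lifecycle configuration to a bucket."""
    # Imported lazily so the rule itself can be inspected without AWS access.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration=LIFECYCLE_CONFIG,
    )
```

Note that lifecycle transitions only address storage cost, not read latency, which is why this option still fails the sub-millisecond requirement.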
Using DynamoDB with DAX provides sub-millisecond read latency for frequently accessed data. Exporting the table to S3 with the built-in DynamoDB table export feature and querying the export with Athena handles the one-time queries, and because both services are fully managed and require no custom code, this option carries the least operational overhead while meeting both requirements.
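The export-and-query path described above can be sketched with boto3. The table ARN, bucket, Athena database, and query contents are hypothetical placeholders; table export also assumes point-in-time recovery is enabled on the table:

```python
def export_table_to_s3(table_arn: str, bucket: str) -> str:
    """Start a native DynamoDB table export to S3 (no custom ETL script).

    Requires point-in-time recovery to be enabled on the source table.
    """
    # Imported lazily so the sketch can be read without an AWS environment.
    import boto3

    dynamodb = boto3.client("dynamodb")
    resp = dynamodb.export_table_to_point_in_time(
        TableArn=table_arn,
        S3Bucket=bucket,
        ExportFormat="DYNAMODB_JSON",
    )
    return resp["ExportDescription"]["ExportArn"]

def build_athena_query(table: str, column: str, value: str) -> str:
    """Build a simple one-time query over the exported data (placeholder SQL)."""
    return f"SELECT * FROM \"{table}\" WHERE {column} = '{value}'"

def run_one_time_query(query: str, database: str, output_s3: str) -> str:
    """Submit the query to Athena and return its execution ID."""
    import boto3

    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return resp["QueryExecutionId"]
```

Both calls are one-shot, fully managed operations, which is what keeps the operational overhead low compared with a scheduled custom export script.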
While DynamoDB can provide low-latency access, adding Kinesis Data Streams and Kinesis Data Firehose introduces extra components to configure and monitor, and this option still provides no way to run one-time queries on the data in S3. The added complexity conflicts with the requirement for minimal operational overhead.