Which solution will meet these requirements with the LEAST operational overhead?
Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the data to an Amazon S3 bucket.
Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older data to S3 Glacier Deep Archive for long-term storage. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to read the data from Kinesis Data Streams. Store the records in an Amazon S3 bucket.
Explanations:
Amazon RDS is suited to transactional workloads but is not designed for sub-millisecond read latency. A periodic custom export script also adds operational overhead (scheduling, monitoring, and failure handling) and provides no query layer over the exported data, so neither the real-time access requirement nor the historical querying requirement is met.
Storing the data directly in Amazon S3 cannot meet the sub-millisecond latency requirement for frequently accessed data: S3 object retrieval latency is measured in milliseconds at best. Athena covers the one-time queries on historical data, and a Lifecycle policy handles archival cost (see the sketch below), but neither addresses the real-time access path.
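For reference, the Lifecycle rule this option describes is a one-time configuration. The following is a minimal boto3 sketch; the bucket name and the 90-day threshold are illustrative assumptions, not values from the question.

```python
import boto3

s3 = boto3.client("s3")

# Transition all objects to S3 Glacier Deep Archive after 90 days.
# Bucket name and day count are placeholders for illustration only.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-older-data",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}  # assumed threshold
                ],
            }
        ]
    },
)
```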
Amazon DynamoDB fronted by DAX serves cached reads with microsecond latency, meeting the sub-millisecond requirement for frequently accessed data. DynamoDB table export to S3 is a fully managed feature (no custom scripts or pipelines to maintain), and Athena can then run the one-time queries on the exported historical data in place. This combination meets both requirements with the least operational overhead, as sketched below.
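A minimal boto3 sketch of the export-and-query half of this answer, assuming point-in-time recovery (PITR) is enabled on the table and that a Glue/Athena table has already been defined over the export location; every name below is an illustrative placeholder:

```python
import boto3

dynamodb = boto3.client("dynamodb")
athena = boto3.client("athena")

# Fully managed export of the table to S3 (requires PITR on the table).
export = dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders",  # placeholder
    S3Bucket="example-historical-data",                               # placeholder
    ExportFormat="DYNAMODB_JSON",
)
print("Export started:", export["ExportDescription"]["ExportArn"])

# One-time Athena query over the exported data. Assumes a Glue/Athena
# table named "orders_export" was created over the export prefix.
query = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM orders_export",      # placeholder query
    QueryExecutionContext={"Database": "historical_db"},   # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Athena query ID:", query["QueryExecutionId"])
```

For the real-time path, reads would go through the DAX cluster endpoint rather than the base DynamoDB endpoint; DAX is API-compatible with DynamoDB read operations, so application code changes are minimal.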
The streaming option introduces unnecessary moving parts: a Kinesis Data Streams stream and a Kinesis Data Firehose delivery stream must both be provisioned and monitored just to land records in Amazon S3. It also provides no way to query the historical data once it is in S3, and DynamoDB on its own offers single-digit-millisecond reads, not the sub-millisecond latency that DAX provides.