Which of the following solutions achieve this goal?
(Choose two.)
Use Amazon S3 to collect multiple records in one S3 object. Use a lifecycle configuration to move data to Amazon Glacier immediately after write. Use expedited retrievals when reading the data.
Write the records to Amazon Kinesis Data Firehose and configure Kinesis Data Firehose to deliver the data to Amazon S3 after 5 minutes. Set an expiration action at 30 days on the S3 bucket.
Use an AWS Lambda function invoked via Amazon API Gateway to collect data for 5 minutes. Write data to Amazon S3 just before the Lambda execution stops.
Write the records to Amazon DynamoDB configured with a Time To Live (TTL) of 30 days. Read data using the GetItem or BatchGetItem call.
Write the records to an Amazon ElastiCache for Redis cluster. Configure the Redis append-only file (AOF) persistence logs to write to Amazon S3. Recover from the log if the ElastiCache instance has failed.
Explanations:
Amazon S3 is suitable for storing the data, but transitioning objects to Amazon Glacier immediately after write is not cost-effective here: every read would then require a retrieval, expedited retrievals carry a per-GB premium, and Glacier imposes a minimum storage duration charge, all of which undermine any savings for data that must remain readable within a 5-minute delay.
Writing records to Kinesis Data Firehose with a 5-minute (300-second) buffering interval delivers batched data to S3 within the acceptable read delay. An S3 lifecycle expiration action set to 30 days then deletes the data automatically at the end of the retention period, keeping costs low with no operational effort.
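As a sketch of this option, the two relevant configurations can be expressed as the request parameters you would pass to the Firehose and S3 APIs. Stream and bucket names, ARNs, and the role are hypothetical placeholders; the dicts are shown standalone (no AWS call is made), but with boto3 they would go to firehose.create_delivery_stream() and s3.put_bucket_lifecycle_configuration().

```python
# Firehose buffers incoming records and flushes a batch to S3 every
# 300 seconds (5 minutes) or every 5 MiB, whichever comes first.
delivery_stream_params = {
    "DeliveryStreamName": "records-stream",  # hypothetical name
    "S3DestinationConfiguration": {
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",  # placeholder
        "BucketARN": "arn:aws:s3:::records-bucket",                  # placeholder
        "BufferingHints": {"IntervalInSeconds": 300, "SizeInMBs": 5},
    },
}

# S3 lifecycle rule that expires (deletes) objects 30 days after creation.
lifecycle_params = {
    "Bucket": "records-bucket",  # placeholder
    "LifecycleConfiguration": {
        "Rules": [
            {
                "ID": "expire-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Expiration": {"Days": 30},
            }
        ]
    },
}
```

The 300-second interval is the key knob: it is what turns a stream of individual records into a small number of S3 objects, which also keeps PUT-request costs down.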
While using AWS Lambda to aggregate data for 5 minutes might seem workable, Lambda invocations are stateless and billed for their full duration, so keeping a function running for 5 minutes just to buffer records is costly, and any records held in memory are lost if the function fails before writing to S3. The option also leaves the read path and the 30-day retention handling unspecified.
Storing records in DynamoDB with a TTL attribute set 30 days out retains each item for the required duration and deletes it automatically at no extra cost, while GetItem and BatchGetItem provide low-latency reads. TTL deletion is asynchronous, so expired items may linger briefly, which is acceptable for this retention requirement.
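A minimal sketch of the DynamoDB side of this option, assuming a hypothetical table schema with a record_id key and an expires_at TTL attribute. The helper only builds the item in the low-level wire format (numbers as "N" strings, strings as "S"); with boto3 the dict would be passed to dynamodb.put_item(), and TTL would be enabled on the expires_at attribute via update_time_to_live().

```python
import time

TTL_DAYS = 30  # retention period from the scenario


def record_item(record_id, payload, now=None):
    """Build a DynamoDB item (hypothetical schema) whose 'expires_at'
    attribute is an epoch timestamp 30 days in the future; once TTL is
    enabled on that attribute, DynamoDB deletes the item after it passes."""
    now = time.time() if now is None else now
    expires_at = int(now) + TTL_DAYS * 24 * 60 * 60
    return {
        "record_id": {"S": record_id},
        "payload": {"S": payload},
        "expires_at": {"N": str(expires_at)},  # TTL attribute, epoch seconds
    }
```

Because TTL deletions are free, this avoids both a cleanup job and per-delete request charges at the end of the retention window.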
ElastiCache is designed for low-latency caching, not 30-day retention, so keeping the data set in memory for a month would be expensive. The option's premise is also flawed: Redis AOF persistence writes to the node's local volume, not to Amazon S3, and recovering from the AOF after a node failure adds complexity without meeting the read and retention requirements.