What is the MOST secure and cost-effective solution to meet these requirements?
Archive the data to Amazon S3 and apply a restrictive bucket policy to deny the s3:DeleteObject API.
Archive the data to Amazon S3 Glacier and apply a Vault Lock policy.
Archive the data to Amazon S3 and replicate it to a second bucket in a second AWS Region. Choose the S3 Standard-Infrequent Access (S3 Standard-IA) storage class and apply a restrictive bucket policy to deny the s3:DeleteObject API.
Migrate the log data to a 16 TB Amazon Elastic Block Store (Amazon EBS) volume. Create a snapshot of the EBS volume.
Explanations:
Archiving to Amazon S3 does not by itself guarantee long-term retention: objects can still be deleted unless additional protections such as versioning or Object Lock are enforced. A restrictive bucket policy that denies s3:DeleteObject is not sufficient, because any principal with permission to manage bucket policies can modify or remove the policy and then delete the data.
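As a minimal sketch of why this option falls short, the following boto3 snippet applies such a deny policy (the bucket name is a hypothetical placeholder). The key weakness is visible in the comment: the same API that sets the policy can later overwrite it.

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name used only for illustration.
bucket = "example-log-archive-bucket"

# Deny s3:DeleteObject for all principals. Note, however, that this
# policy itself can later be replaced or removed by any principal
# holding s3:PutBucketPolicy / s3:DeleteBucketPolicy permissions,
# so it does not make the data immutable.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyObjectDeletion",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```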
Archiving to Amazon S3 Glacier offers a cost-effective solution for long-term storage and compliance. Applying a Vault Lock policy ensures that once the lock is completed, the policy can no longer be changed or removed, so the data retention requirement is met securely even against users with administrative permissions.
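A rough sketch of the two-step Vault Lock workflow with boto3 is shown below. The vault name, Region, and account ID are placeholders, and the sample policy simply denies glacier:DeleteArchive; a real policy would typically add an archive-age condition to enforce the required retention period.

```python
import json

import boto3

glacier = boto3.client("glacier")

# Hypothetical vault name, Region, and account ID for illustration.
vault_name = "log-archive-vault"
vault_arn = f"arn:aws:glacier:us-east-1:123456789012:vaults/{vault_name}"

# Vault Lock policy that denies deletion of archives in the vault.
lock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyArchiveDeletion",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "glacier:DeleteArchive",
            "Resource": vault_arn,
        }
    ],
}

# Step 1: initiate the lock; the policy enters an in-progress state
# during which it can still be tested and aborted.
response = glacier.initiate_vault_lock(
    vaultName=vault_name,
    policy={"Policy": json.dumps(lock_policy)},
)

# Step 2: complete the lock with the returned lock ID. Once completed,
# the Vault Lock policy is immutable and cannot be changed or removed.
glacier.complete_vault_lock(vaultName=vault_name, lockId=response["lockId"])
```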
While using S3 Standard-IA and replicating to another Region adds redundancy, it does not provide the same cost-effectiveness or compliance assurance as Glacier with Vault Lock. In addition, a restrictive bucket policy alone does not protect the data from deletion, since users with sufficient permissions can modify or remove the policy.
Migrating the log data to an Amazon EBS volume is not cost-effective for long-term storage compared to S3 Glacier. EBS volumes are designed for active workloads, and although snapshots provide backups, they can still be deleted by authorized users, so this option does not meet the long-term retention requirement as effectively as Glacier with Vault Lock.