Which solution will meet these requirements with the LEAST operational overhead?
Create a new S3 bucket that has server-side encryption with customer-provided keys (SSE-C) as the encryption type. Copy the existing objects to the new S3 bucket. Specify SSE-C.
Create a new S3 bucket that has server-side encryption with Amazon S3 managed keys (SSE-S3) as the encryption type. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Specify SSE-S3.
Use AWS CloudHSM to store the encryption keys. Create a new S3 bucket. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Encrypt the objects by using the keys from CloudHSM.
Use the S3 Intelligent-Tiering storage class for the S3 bucket. Create an S3 Intelligent-Tiering archive configuration to transition objects that are not accessed for 90 days to S3 Glacier Deep Archive.
Explanations:
Using SSE-C shifts key management entirely onto the customer: the application must generate, store, rotate, and supply the encryption key with every request, which increases operational overhead and complexity rather than reducing it. Although SSE-C does take KMS out of the request path, the key-handling burden makes it a poor fit for the LEAST-operational-overhead requirement.
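For context, here is a minimal boto3 sketch of what SSE-C usage looks like; the bucket name and object key are hypothetical. The point it illustrates is that S3 never stores the key, so the caller must present the identical key on every PUT and GET.

```python
import boto3
import os

s3 = boto3.client("s3")

# With SSE-C, the caller owns the key: S3 uses it to encrypt, then discards it,
# so the same 256-bit key must accompany every request that touches the object.
customer_key = os.urandom(32)  # in practice, loaded from your own key store

s3.put_object(
    Bucket="example-sse-c-bucket",      # hypothetical bucket name
    Key="reports/2024/summary.csv",     # hypothetical object key
    Body=b"...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# Reading the object back fails unless the identical key is provided again.
obj = s3.get_object(
    Bucket="example-sse-c-bucket",
    Key="reports/2024/summary.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```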
Switching to SSE-S3 eliminates the need for customer-managed KMS keys, because Amazon S3 manages the keys internally and SSE-S3 carries no per-request charge, so the KMS request costs disappear entirely. Using S3 Batch Operations to copy the existing objects into the new bucket handles the transition at scale with minimal operational overhead, which makes this the correct choice.
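A hedged sketch of the two pieces this option involves, using boto3; the account ID, role ARN, bucket names, and manifest object below are placeholders. It sets SSE-S3 as the new bucket's default encryption, then submits an S3 Batch Operations copy job that writes the copies with SSE-S3 (AES256).

```python
import boto3

ACCOUNT_ID = "111122223333"                                       # placeholder
ROLE_ARN = "arn:aws:iam::111122223333:role/batch-ops-copy-role"   # placeholder

s3 = boto3.client("s3")
s3control = boto3.client("s3control")

# 1) Default-encrypt the new bucket with SSE-S3 so future uploads make no KMS
#    calls. (Buckets created after January 2023 already default to SSE-S3,
#    but setting it explicitly documents the intent.)
s3.put_bucket_encryption(
    Bucket="example-new-sse-s3-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# 2) Submit a Batch Operations job that copies every object listed in a CSV
#    manifest into the new bucket, re-encrypting the copies with SSE-S3.
s3control.create_job(
    AccountId=ACCOUNT_ID,
    ConfirmationRequired=False,
    Priority=10,
    RoleArn=ROLE_ARN,
    Operation={
        "S3PutObjectCopy": {
            "TargetResource": "arn:aws:s3:::example-new-sse-s3-bucket",
            "NewObjectMetadata": {"SSEAlgorithm": "AES256"},  # SSE-S3 on copies
        }
    },
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::example-manifests/objects.csv",  # placeholder
            "ETag": "example-etag",  # ETag of the uploaded manifest object
        },
    },
    Report={
        "Bucket": "arn:aws:s3:::example-reports-bucket",  # placeholder
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "ReportScope": "FailedTasksOnly",
        "Prefix": "batch-copy",
    },
)
```

Once the job completes, the completion report in the report bucket lists any objects that failed to copy, so the migration can be verified without custom tooling.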
While CloudHSM can provide a secure way to manage encryption keys, it involves significant operational overhead: you must provision and manage the HSM cluster, handle backups, and integrate the encryption workflow yourself. It also trades KMS request charges for hourly CloudHSM instance costs, so it neither reduces cost effectively nor simplifies the encryption process.
S3 Intelligent-Tiering helps manage storage costs for infrequently accessed objects, but it does nothing about the KMS request charges incurred on every access to the encrypted objects, so it does not address the stated problem. Moreover, archiving frequently accessed data to S3 Glacier Deep Archive would hurt retrieval times and complicate access patterns.
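For completeness, a sketch of the archive configuration this option describes, with a hypothetical bucket name. One practical wrinkle worth noting: the DEEP_ARCHIVE_ACCESS tier in an Intelligent-Tiering configuration requires a Days value of at least 180, so the 90-day threshold in the option would need either the ARCHIVE_ACCESS tier or a longer window.

```python
import boto3

s3 = boto3.client("s3")

# Opt-in archive tiering for objects stored in the Intelligent-Tiering class.
# DEEP_ARCHIVE_ACCESS accepts a minimum of 180 days, so 180 is used here even
# though the option above says 90; a 90-day value is only valid for ARCHIVE_ACCESS.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-intelligent-tiering-bucket",  # hypothetical bucket name
    Id="archive-cold-objects",
    IntelligentTieringConfiguration={
        "Id": "archive-cold-objects",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```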