Which solutions will meet these requirements?
(Choose two.)
Use an S3 Access Point instead of accessing the S3 bucket directly.
Upload the files into multiple S3 buckets.
Use S3 multipart uploads.
Fetch multiple byte-ranges of an object in parallel.
Add a random prefix to each object when uploading the files.
Explanations:
S3 Access Points simplify access management for shared datasets, for example across multiple VPCs, but they do not increase throughput for large file uploads or downloads.
Using multiple S3 buckets does not inherently improve throughput for uploading or downloading large files. Amazon S3 automatically scales request throughput within a single bucket, so splitting the data across buckets adds operational overhead without addressing the scaling requirement.
S3 multipart uploads split a large file into parts that are uploaded in parallel and then reassembled by S3. This increases aggregate upload throughput and allows the company to scale uploads of large files efficiently.
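As an illustration, the following Python sketch uses boto3's managed transfer to perform a multipart upload with parallel parts. The bucket name, key, file path, part size, and concurrency are placeholder assumptions, not values from the scenario.

```python
# Sketch: multipart upload of a large file using boto3's managed transfer.
# Bucket, key, file name, and tuning values below are hypothetical.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MB per part
    max_concurrency=8,                     # upload up to 8 parts in parallel
)

s3.upload_file(
    Filename="large-dataset.bin",
    Bucket="example-bucket",
    Key="uploads/large-dataset.bin",
    Config=config,
)
```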
Fetching multiple byte ranges of an object in parallel, by issuing GET requests with the HTTP Range header, allows a single client or several EC2 instances to retrieve different parts of the file simultaneously. This improves download throughput and reduces the overall transfer time.
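A minimal sketch of the download side, assuming a placeholder bucket, key, and part size: each worker fetches a different byte range of the same object and the parts are reassembled locally.

```python
# Sketch: parallel byte-range GETs against one object; names are hypothetical.
from concurrent.futures import ThreadPoolExecutor
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "example-bucket", "uploads/large-dataset.bin"
PART_SIZE = 64 * 1024 * 1024  # 64 MB per range

# Determine the object size, then compute the byte ranges to fetch.
size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
ranges = [(start, min(start + PART_SIZE, size) - 1)
          for start in range(0, size, PART_SIZE)]

def fetch(byte_range):
    start, end = byte_range
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={start}-{end}")
    return start, resp["Body"].read()

# Download the ranges in parallel, then write them back out in order.
with ThreadPoolExecutor(max_workers=8) as pool:
    parts = dict(pool.map(fetch, ranges))

with open("large-dataset.bin", "wb") as f:
    for start, _ in ranges:
        f.write(parts[start])
```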
Adding a random prefix to each object key spreads requests across key-name prefixes, which can raise aggregate request rates because S3 scales request throughput per prefix. However, this mainly benefits workloads with very high request counts; it does not by itself improve upload or download throughput for individual large files, so it is not a complete scaling solution for this requirement.
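For completeness, a tiny sketch of what key-name randomization looks like; the prefix length and file name are arbitrary assumptions.

```python
# Sketch: prepend a short random hex prefix to an object key so uploads
# spread across key-name prefixes. File name below is a placeholder.
import secrets

def prefixed_key(filename: str) -> str:
    return f"{secrets.token_hex(2)}/{filename}"  # e.g. "3fa9/large-dataset.bin"

print(prefixed_key("large-dataset.bin"))
```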