Which solution will meet these requirements?
A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for transcript file analysis.
B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file analysis.
D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file analysis.
Explanations:
A. Amazon Rekognition is an image and video analysis service; it does not process audio, so it cannot perform multiple speaker recognition.
B. Correct. Amazon Transcribe converts speech to text and supports multiple speaker recognition through speaker diarization, which tags each transcript segment with a speaker label. Storing the transcript files in Amazon S3 satisfies the 7-year retention requirement, and Amazon Athena can query them in place with standard SQL for analysis.
C. Amazon Translate performs language translation, not speaker recognition. In addition, Amazon Redshift is a data warehouse and is not a cost-effective choice for long-term transcript storage and auditing.
D. Like option A, Amazon Rekognition does not support audio processing. Furthermore, Amazon Textract extracts text from scanned documents and forms; it is not intended for analyzing transcript files.
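The Transcribe-plus-Athena workflow described above can be sketched with the AWS SDK for Python (boto3). This is a minimal illustration, not an official AWS sample: the bucket, job, table, and column names are placeholder assumptions, and the helper only builds the request parameters so the diarization settings are easy to see.

```python
# Hedged sketch of option B. All resource names (buckets, job name,
# Athena table/columns) are hypothetical placeholders.

def build_transcribe_request(job_name: str, media_uri: str,
                             output_bucket: str, max_speakers: int = 2) -> dict:
    """Build kwargs for boto3's transcribe.start_transcription_job().

    ShowSpeakerLabels enables speaker diarization, so each transcript
    segment is tagged with a speaker label (multiple speaker recognition).
    OutputBucketName writes the transcript JSON to S3, where it can be
    retained long term and queried with Amazon Athena.
    """
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": "mp3",          # assumption: recordings are MP3
        "LanguageCode": "en-US",
        "OutputBucketName": output_bucket,
        "Settings": {
            "ShowSpeakerLabels": True,
            "MaxSpeakerLabels": max_speakers,
        },
    }


# Illustrative Athena query over a table defined on the S3 transcript
# output (table and column names are assumptions).
ATHENA_QUERY = """
SELECT speaker_label, COUNT(*) AS segments
FROM transcripts_table
GROUP BY speaker_label;
"""

# To actually submit the job (requires AWS credentials and permissions):
#   import boto3
#   transcribe = boto3.client("transcribe")
#   transcribe.start_transcription_job(
#       **build_transcribe_request("call-001",
#                                  "s3://call-audio/call-001.mp3",
#                                  "call-transcripts"))
```

Keeping the request construction separate from the API call makes the speaker-recognition settings (`ShowSpeakerLabels`, `MaxSpeakerLabels`) explicit, which is the detail that distinguishes option B from the other choices.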