Which solution meets these requirements?
Export the database to a .csv file with two columns: claim_label and claim_text. Use the Amazon SageMaker Object2Vec algorithm and the .csv file to train a model. Use SageMaker to deploy the model to an inference endpoint. Develop a service in the application to use the inference endpoint to process incoming claims, predict the labels, and route the claims to the appropriate queue.
Export the database to a .csv file with one column: claim_text. Use the Amazon SageMaker Latent Dirichlet Allocation (LDA) algorithm and the .csv file to train a model. Use the LDA algorithm to detect labels automatically. Use SageMaker to deploy the model to an inference endpoint. Develop a service in the application to use the inference endpoint to process incoming claims, predict the labels, and route the claims to the appropriate queue.
Use Amazon Textract to process the database and automatically detect two columns: claim_label and claim_text. Use Amazon Comprehend custom classification and the extracted information to train the custom classifier. Develop a service in the application to use the Amazon Comprehend API to process incoming claims, predict the labels, and route the claims to the appropriate queue.
Export the database to a .csv file with two columns: claim_label and claim_text. Use Amazon Comprehend custom classification and the .csv file to train the custom classifier. Develop a service in the application to use the Amazon Comprehend API to process incoming claims, predict the labels, and route the claims to the appropriate queue.
Explanations:
SageMaker Object2Vec is a general-purpose neural embedding algorithm that learns vector representations of objects such as sentences or sentence pairs; it is not a turnkey text classifier. Using it would require ML expertise to preprocess the data, tune hyperparameters, and manage the training and inference infrastructure, so this solution is too complex for a company without an ML team.
Latent Dirichlet Allocation (LDA) is an unsupervised topic-modeling algorithm: it discovers latent topics in a corpus rather than predicting predefined labels. The company needs claims assigned to known categories (a supervised classification task), so this solution does not meet the requirements.
Amazon Textract extracts text and structured data from scanned documents and images; it does not process a database or categorize claims. Because the claim data is already structured with claim_label and claim_text columns, the Textract step is unnecessary, and Comprehend custom classification alone would suffice.
Amazon Comprehend custom classification lets the company train a text classifier from its labeled examples without requiring ML expertise: export the claim_label and claim_text columns to a .csv file, train the custom classifier on it, and call the Comprehend API from the application to predict the label of each incoming claim and route it to the appropriate queue. This option meets the requirements with the least operational and ML effort, and a wiring sketch is shown below.
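For illustration only, here is a minimal boto3 sketch of how the chosen option could be wired up. It assumes the two-column .csv (claim_label,claim_text, no header) has already been uploaded to S3 and that an IAM role with read access to that bucket exists; the bucket, role, classifier, endpoint, label, and queue names are all hypothetical, and routing is shown with Amazon SQS even though the question does not specify the queue technology.

```python
"""Hypothetical sketch: train an Amazon Comprehend custom classifier from the
exported claims .csv, then classify and route incoming claims."""
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
sqs = boto3.client("sqs", region_name="us-east-1")

# 1. Train the custom classifier from the exported .csv in S3.
#    The file must contain two columns with no header: claim_label,claim_text.
training_job = comprehend.create_document_classifier(
    DocumentClassifierName="claims-classifier",                      # hypothetical
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendS3Access",
    InputDataConfig={"S3Uri": "s3://example-bucket/claims/claims.csv"},
    LanguageCode="en",
)
classifier_arn = training_job["DocumentClassifierArn"]

# 2. After training finishes (describe_document_classifier reports TRAINED),
#    create a real-time endpoint for the classifier.
endpoint = comprehend.create_endpoint(
    EndpointName="claims-classifier-endpoint",                       # hypothetical
    ModelArn=classifier_arn,
    DesiredInferenceUnits=1,
)
endpoint_arn = endpoint["EndpointArn"]

# 3. Map each predicted claim label to the queue that should receive the claim.
#    The labels and queue URLs below are placeholders.
QUEUE_URLS = {
    "auto": "https://sqs.us-east-1.amazonaws.com/123456789012/auto-claims",
    "home": "https://sqs.us-east-1.amazonaws.com/123456789012/home-claims",
}

def route_claim(claim_text: str) -> str:
    """Predict the label for an incoming claim and send it to the matching queue."""
    result = comprehend.classify_document(Text=claim_text, EndpointArn=endpoint_arn)
    # Classes are returned sorted by confidence score; take the top prediction.
    label = result["Classes"][0]["Name"]
    sqs.send_message(QueueUrl=QUEUE_URLS[label], MessageBody=claim_text)
    return label
```

In the application, the service described in the correct option would simply call route_claim for each new claim; no model tuning or infrastructure management is needed beyond creating the classifier and its endpoint.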