What should the data science team do to address this issue in the MOST operationally efficient manner?
Use Amazon SageMaker Studio to rebuild the model. Create a notebook that uses the XGBoost training container to perform model training. Deploy the model to an endpoint. Enable Amazon SageMaker Model Monitor to store inferences. Use the inferences to create Shapley values that help explain model behavior. Create a chart that shows features and SHapley Additive exPlanations (SHAP) values to explain to the credit team how the features affect the model outcomes.
Use Amazon SageMaker Studio to rebuild the model. Create a notebook that uses the XGBoost training container to perform model training. Activate Amazon SageMaker Debugger, and configure it to calculate and collect Shapley values. Create a chart that shows features and SHapley Additive exPlanations (SHAP) values to explain to the credit team how the features affect the model outcomes.
Create an Amazon SageMaker notebook instance. Use the notebook instance and the XGBoost library to retrain the model locally. Use the plot_importance() method in the Python XGBoost interface to create a feature importance chart. Use that chart to explain to the credit team how the features affect the model outcomes.
Use Amazon SageMaker Studio to rebuild the model. Create a notebook that uses the XGBoost training container to perform model training. Deploy the model to an endpoint. Use Amazon SageMaker Processing to post-analyze the model and automatically create a feature importance explainability chart for the credit team.
Explanations:
While this option uses Amazon SageMaker and SHAP values for explainability, it requires deploying an endpoint, enabling Model Monitor to capture inferences, and then computing the Shapley values manually from those inferences. Compared with collecting SHAP values during training, these extra steps make it less operationally efficient.
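To illustrate the manual work this option implies, here is a minimal sketch of the offline SHAP analysis using the open-source shap library; `booster` (the trained XGBoost model) and `X_captured` (a DataFrame rebuilt from the Model Monitor capture files) are assumptions, not part of the option itself.

```python
import shap

# booster and X_captured are assumed: the trained XGBoost model and a
# DataFrame reconstructed from the inference records that Model Monitor
# captured to Amazon S3.
explainer = shap.TreeExplainer(booster)
shap_values = explainer.shap_values(X_captured)

# Summary chart showing how each feature pushes predictions up or down.
shap.summary_plot(shap_values, X_captured)
```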
This option correctly integrates Amazon SageMaker Debugger, which automates the calculation and collection of Shapley values during the model training process. This makes it easier for the credit team to understand model decisions without extensive data science knowledge.
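A minimal sketch of that configuration with the SageMaker Python SDK appears below, assuming `xgboost_image_uri`, `role`, and `train_s3_uri` are already defined; `average_shap`, `full_shap`, and `feature_importance` are built-in Debugger collections for the XGBoost framework, and the S3 bucket name is hypothetical.

```python
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig
from sagemaker.estimator import Estimator

# Ask Debugger to save SHAP-related tensors while the training job runs.
hook_config = DebuggerHookConfig(
    s3_output_path="s3://my-bucket/debugger-output",  # hypothetical bucket
    collection_configs=[
        CollectionConfig(name="average_shap"),
        CollectionConfig(name="full_shap"),
        CollectionConfig(name="feature_importance"),
    ],
)

estimator = Estimator(
    image_uri=xgboost_image_uri,   # XGBoost training container (assumed defined)
    role=role,                     # execution role (assumed defined)
    instance_count=1,
    instance_type="ml.m5.xlarge",
    debugger_hook_config=hook_config,
)
estimator.fit({"train": train_s3_uri})  # training channel (assumed defined)
```

After training, the saved tensors can be read back with the smdebug library (for example, via `create_trial(estimator.latest_job_debugger_artifacts_path())`) and charted for the credit team, with no separate endpoint or monitoring job required.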
This option suggests using a local Amazon SageMaker notebook instance and the plot_importance() method. While it produces a feature importance chart, that chart shows only global importance and lacks the depth of explanation provided by SHAP values, which are more informative for understanding individual predictions.
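For contrast, a minimal local sketch of this approach, assuming `X_train` and `y_train` are already loaded on the notebook instance; note that the resulting chart says nothing about why any single credit application was approved or denied.

```python
import xgboost as xgb
import matplotlib.pyplot as plt

# Retrain locally on the notebook instance (X_train/y_train assumed loaded).
dtrain = xgb.DMatrix(X_train, label=y_train)
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=100)

# Global importance only: "weight" counts how often a feature is used to split.
xgb.plot_importance(booster, importance_type="weight")
plt.tight_layout()
plt.show()
```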
This option mentions using Amazon SageMaker Processing to create a feature importance explainability chart, but it lacks the detailed, per-prediction explanation of model behavior that SHAP values provide, which is essential for understanding specific credit decisions.
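For completeness, a minimal sketch of what the post-analysis job could look like with ScriptProcessor; the container image, role, S3 URIs, and the `analysis.py` script are all assumptions, and that script would still have to implement the explainability logic itself, which is why this is not the most operationally efficient choice.

```python
from sagemaker.processing import ScriptProcessor, ProcessingInput, ProcessingOutput

# Run a post-training analysis script as a SageMaker Processing job.
processor = ScriptProcessor(
    image_uri=analysis_image_uri,  # container with analysis dependencies (assumed)
    command=["python3"],
    role=role,                     # execution role (assumed defined)
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

processor.run(
    code="analysis.py",  # hypothetical script that builds the importance chart
    inputs=[ProcessingInput(source=model_s3_uri,  # model artifact location (assumed)
                            destination="/opt/ml/processing/model")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/output")],
)
```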