Which solution will improve the computational efficiency of the models?
Use Amazon CloudWatch metrics to gain visibility into the SageMaker training weights, gradients, biases, and activation outputs. Compute the filter ranks based on the training information. Apply pruning to remove the low-ranking filters. Set new weights based on the pruned set of filters. Run a new training job with the pruned model.
Use Amazon SageMaker Ground Truth to build and run data labeling workflows. Collect a larger labeled dataset with the labeling workflows. Run a new training job that uses the new labeled data together with the previous training data.
Use Amazon SageMaker Debugger to gain visibility into the training weights, gradients, biases, and activation outputs. Compute the filter ranks based on the training information. Apply pruning to remove the low-ranking filters. Set the new weights based on the pruned set of filters. Run a new training job with the pruned model.
Use Amazon SageMaker Model Monitor to gain visibility into the ModelLatency metric and OverheadLatency metric of the model after the company deploys it. Increase the model learning rate. Run a new training job.
Explanations:
Although pruning can help reduce model size and improve efficiency, Amazon CloudWatch provides infrastructure-level and job-level metrics, not tensor-level visibility into training weights, gradients, biases, and activation outputs. It therefore cannot supply the information needed to compute filter ranks or adjust weights; that work must be done within the training framework using the captured training tensors.
Although building a larger labeled dataset can improve model accuracy, it does not directly address the optimization of computational efficiency or reduction in resource consumption, which is the primary goal in this scenario.
Amazon SageMaker Debugger captures the training tensors (weights, gradients, biases, and activation outputs) needed to analyze the model's training dynamics. With that visibility, the company can compute filter ranks, prune the low-ranking filters, and retrain the smaller model, which optimizes it for better computational efficiency; see the sketches after these explanations.
Increasing the model’s learning rate does not reduce the model’s size or computational cost and could harm accuracy. Furthermore, Amazon SageMaker Model Monitor assesses a deployed model’s behavior (for example, data quality and drift) rather than providing visibility into training tensors or optimizing the model’s structure during training.
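For reference, here is a minimal sketch of how the Debugger-based approach in the correct option could be wired up. It assumes a PyTorch training script named train.py plus a placeholder S3 bucket and IAM role, none of which come from the question itself. The first step enables Debugger collections on the training job so that weights, gradients, and biases are saved to Amazon S3 during training:

```python
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig
from sagemaker.pytorch import PyTorch

# Save the built-in weights, gradients, and biases collections every 100 steps.
# Activation outputs could also be captured with an additional custom collection
# (e.g., using an include_regex over layer output tensors); omitted for brevity.
hook_config = DebuggerHookConfig(
    s3_output_path="s3://example-bucket/debug-output",  # placeholder bucket
    collection_configs=[
        CollectionConfig(name="weights", parameters={"save_interval": "100"}),
        CollectionConfig(name="gradients", parameters={"save_interval": "100"}),
        CollectionConfig(name="biases", parameters={"save_interval": "100"}),
    ],
)

estimator = PyTorch(
    entry_point="train.py",                               # placeholder training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    framework_version="2.1",
    py_version="py310",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    debugger_hook_config=hook_config,  # emit training tensors to S3 via Debugger
)

estimator.fit({"training": "s3://example-bucket/train-data"})  # placeholder dataset
```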
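Once the training job finishes, the captured tensors can be read back with the open-source smdebug library to rank and prune filters. This is a sketch under the assumption that the convolutional weights were saved in the "weights" collection; the S3 path and the 20% pruning fraction are illustrative, not prescribed by the question:

```python
import numpy as np
from smdebug.trials import create_trial

# Placeholder S3 path where the training job wrote its Debugger output
# (the s3_output_path set on DebuggerHookConfig).
DEBUG_OUTPUT_PATH = "s3://example-bucket/debug-output"

# Load the captured tensors for the completed training job.
trial = create_trial(DEBUG_OUTPUT_PATH)
last_step = trial.steps()[-1]

# Rank each convolutional filter by the L1 norm of its weights at the final step.
# Low-norm filters contribute little to the activations and are pruning candidates.
filter_ranks = {}
for name in trial.tensor_names(collection="weights"):
    weights = trial.tensor(name).value(last_step)
    if weights.ndim == 4:  # conv kernels shaped (out_channels, in_channels, kH, kW)
        # One score per output filter: sum of absolute weights.
        filter_ranks[name] = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

# Mark the lowest-ranked 20% of filters in each conv layer for removal.
PRUNE_FRACTION = 0.2
prune_plan = {}
for name, ranks in filter_ranks.items():
    n_prune = int(len(ranks) * PRUNE_FRACTION)
    prune_plan[name] = np.argsort(ranks)[:n_prune]

# prune_plan maps each conv layer to the filter indices to drop; the pruned
# weights would be written back into a smaller model definition before
# launching the new training job.
print({k: v.tolist() for k, v in prune_plan.items()})
```

The pruned architecture would then be retrained in a new SageMaker training job, as the correct option describes.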