Which metrics should the data scientist use to meet this requirement?
(Choose two.)
Inference latency
Mean squared error (MSE)
Root mean squared error (RMSE)
Precision
Accuracy
Explanations:
Inference latency measures how long the model takes to return predictions once it is deployed; it does not indicate how close those predictions are to the actual delivery times. It is therefore not a metric for evaluating the model's predictive performance.
Mean squared error (MSE) measures the average of the squared errors, that is, the average squared difference between the predicted values and the actual values. It is commonly used to evaluate regression models such as the one being built to predict delivery times.
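As a quick illustration (using hypothetical predicted and actual delivery times in minutes, and assuming scikit-learn is available), MSE can be computed as follows:

```python
from sklearn.metrics import mean_squared_error

# Hypothetical actual and predicted delivery times, in minutes.
y_true = [30.0, 45.0, 25.0, 60.0]
y_pred = [28.0, 50.0, 27.0, 55.0]

# Average of the squared differences between predictions and actuals.
mse = mean_squared_error(y_true, y_pred)
print(f"MSE: {mse:.2f} (squared minutes)")  # MSE: 14.50
```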
Root mean squared error (RMSE) is the square root of the mean of the squared errors, providing a measure of how well the model’s predictions match the actual delivery times. RMSE is particularly useful because it is in the same unit as the target variable, making it easier to interpret.
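Continuing the same hypothetical example, RMSE is the square root of MSE, which puts the error back into minutes and makes it directly comparable to the delivery times themselves:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [30.0, 45.0, 25.0, 60.0]
y_pred = [28.0, 50.0, 27.0, 55.0]

# Square root of MSE, expressed in the same unit as the target (minutes).
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"RMSE: {rmse:.2f} minutes")  # RMSE: 3.81 minutes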
Precision is a classification metric that measures the proportion of true positives among all predicted positives. Because the model in question is a regression model predicting continuous values (delivery times), precision is not applicable.
Accuracy is another classification metric that measures the proportion of correct predictions out of total predictions made. Like precision, accuracy is not suitable for evaluating a regression model, which predicts continuous numerical values rather than discrete classes.
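For contrast, here is a minimal sketch of how precision and accuracy are used: both compare discrete predicted labels against true labels (for example, a hypothetical "late" vs. "on time" classifier), which has no direct analogue when the target is a continuous delivery time.

```python
from sklearn.metrics import precision_score, accuracy_score

# Hypothetical binary labels: 1 = late delivery, 0 = on time.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

print(f"Precision: {precision_score(y_true, y_pred):.2f}")  # TP / (TP + FP) = 3/4 = 0.75
print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")   # correct / total = 4/6 = 0.67
```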