Which sequence of steps should the data scientist take to meet these requirements?
Apply random sampling to the dataset. Then split the dataset into training, validation, and test sets.
Split the dataset into training, validation, and test sets. Then rescale the training set and apply the same scaling to the validation and test sets.
Rescale the dataset. Then split the dataset into training, validation, and test sets.
Split the dataset into training, validation, and test sets. Then rescale the training set, the validation set, and the test set independently.
Explanations:
Random sampling does not address the issue of differing statistical dispersion between features. The remedy is proper feature rescaling, not how the data are sampled.
Splitting the dataset first ensures that the model is evaluated on data it has never seen. Fitting the scaler on the training set alone avoids data leakage, and applying the training-set scaling parameters to the validation and test sets keeps the preprocessing consistent (see the sketch below the explanations).
Rescaling before splitting the data would cause information leakage: the scaling parameters (for example, the mean and standard deviation) would be computed over the entire dataset, including the validation and test examples, so the model would indirectly learn from information that would not be available in a real-world scenario.
Rescaling the training, validation, and test sets independently produces inconsistent scaling: each set would be transformed with different statistics, so identical raw values would map to different scaled values across the sets, which could degrade the model's performance. The scaling parameters should be determined from the training set only and then applied to the validation and test sets.
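The correct sequence can be illustrated with a minimal sketch in Python using scikit-learn. The library choice, the use of StandardScaler, and the 60/20/20 split ratios are illustrative assumptions, not part of the question; any rescaler (such as MinMaxScaler) follows the same fit/transform pattern:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix and labels, for illustration only:
# two features with very different statistical dispersion.
rng = np.random.default_rng(seed=42)
X = rng.normal(loc=[0, 100], scale=[1, 25], size=(1000, 2))
y = rng.integers(0, 2, size=1000)

# 1. Split first: 60% training, 20% validation, 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, random_state=0
)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0
)

# 2. Fit the scaler on the training set ONLY, so its mean and
#    standard deviation are computed without seeing validation
#    or test data.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)

# 3. Apply the training-set scaling parameters to the validation
#    and test sets. transform() reuses the statistics learned in
#    step 2; never call fit() or fit_transform() on these sets.
X_val_scaled = scaler.transform(X_val)
X_test_scaled = scaler.transform(X_test)
```

The key point is that fit_transform is called exactly once, on the training set; the validation and test sets see only transform, which reuses the training-set statistics and therefore avoids leakage while keeping the preprocessing consistent.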