Which approach should the ML specialist use to determine the ideal data transformations for the model?
A. Add an Amazon SageMaker Debugger hook to the script to capture key metrics. Run the script as an AWS Glue job.
B. Add an Amazon SageMaker Experiments tracker to the script to capture key metrics. Run the script as an AWS Glue job.
C. Add an Amazon SageMaker Debugger hook to the script to capture key parameters. Run the script as a SageMaker Processing job.
D. Add an Amazon SageMaker Experiments tracker to the script to capture key parameters. Run the script as a SageMaker Processing job. (Correct)
Explanations:
A. Incorrect. Amazon SageMaker Debugger is built for debugging and monitoring training jobs; its hooks capture tensors and training metrics, not information about data transformations. AWS Glue is an ETL service, and a Debugger hook belongs in training code, not in a Glue preprocessing job.
B. Incorrect. Amazon SageMaker Experiments is the right service for tracking experiments, but this option captures metrics rather than the key parameters of each transformation and runs the script as an AWS Glue job. Glue is meant for ETL and does not integrate with SageMaker experiment tracking the way SageMaker jobs do.
C. Incorrect. A SageMaker Processing job is the right place to run the preprocessing script, but Debugger hooks are designed to capture data from training jobs; they are not intended for recording the parameters of a data-preparation run.
D. Correct. Amazon SageMaker Experiments is the appropriate tool for tracking and comparing experiments, including key parameters such as the features used and the sample counts, and SageMaker Processing jobs are designed for preprocessing tasks. This combination is the best fit for evaluating each data transformation and its impact on model performance.
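For illustration, here is a minimal sketch of what the preprocessing script in option D might look like, using the Tracker from the sagemaker-experiments package to record key parameters such as the number of features and samples. The file paths, transformation logic, and display name are assumptions for the example, not details from the original question.

```python
# Sketch of a preprocessing script intended to run inside a SageMaker
# Processing job, logging key parameters with a SageMaker Experiments tracker.
import pandas as pd
from smexperiments.tracker import Tracker


def preprocess(input_path: str, output_path: str) -> None:
    df = pd.read_csv(input_path)

    # Example transformation (assumed): drop sparse columns, standardize numerics.
    df = df.dropna(axis="columns", thresh=int(0.9 * len(df)))
    numeric_cols = df.select_dtypes("number").columns
    df[numeric_cols] = (df[numeric_cols] - df[numeric_cols].mean()) / df[numeric_cols].std()

    # Record the parameters that describe this transformation run so it can be
    # compared against other runs in SageMaker Experiments.
    with Tracker.create(display_name="feature-prep") as tracker:
        tracker.log_parameters(
            {
                "num_features": len(df.columns),
                "num_samples": len(df),
                "dropna_threshold": 0.9,
            }
        )

    df.to_csv(output_path, index=False)


if __name__ == "__main__":
    # Default SageMaker Processing container paths (assumed layout).
    preprocess(
        "/opt/ml/processing/input/data.csv",
        "/opt/ml/processing/output/data_transformed.csv",
    )
```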
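The script can then be submitted as a SageMaker Processing job, for example with the SKLearnProcessor from the SageMaker Python SDK. The bucket, instance type, and script name below are placeholders; the trial component created by the tracker can afterwards be associated with a trial in SageMaker Experiments so the parameters of each transformation run can be compared side by side.

```python
# Sketch of launching the preprocessing script as a SageMaker Processing job.
from sagemaker import get_execution_role
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=get_execution_role(),
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

processor.run(
    code="preprocess.py",  # the script sketched above
    inputs=[
        ProcessingInput(
            source="s3://example-bucket/raw/",
            destination="/opt/ml/processing/input",
        )
    ],
    outputs=[
        ProcessingOutput(
            source="/opt/ml/processing/output",
            destination="s3://example-bucket/processed/",
        )
    ],
)
```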