

# Amazon SageMaker Experiments in Studio Classic
<a name="experiments"></a>

**Important**  
Experiment tracking using the SageMaker Experiments Python SDK is only available in Studio Classic. We recommend using the new Studio experience and creating experiments using the latest SageMaker AI integrations with MLflow. There is no MLflow UI integration with Studio Classic. If you want to use MLflow with Studio, you must launch the MLflow UI using the AWS CLI. For more information, see [Launch the MLflow UI using the AWS CLI](mlflow-launch-ui.md#mlflow-launch-ui-cli).

Amazon SageMaker Experiments Classic is a capability of Amazon SageMaker AI that lets you create, manage, analyze, and compare your machine learning experiments in Studio Classic. Use SageMaker Experiments to work with both custom experiments that you create programmatically and experiments that SageMaker AI jobs create automatically.

Experiments Classic automatically tracks the inputs, parameters, configurations, and results of your iterations as *runs*. You can assign, group, and organize these runs into *experiments*. SageMaker Experiments is integrated with Amazon SageMaker Studio Classic, providing a visual interface for browsing your active and past experiments, comparing runs on key performance metrics, and identifying the best-performing models. Because SageMaker Experiments tracks all of the steps and artifacts that went into creating a model, you can quickly revisit a model's origins when you troubleshoot issues in production or audit your models for compliance verification.
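In code, a run is created with the SageMaker Experiments Python SDK. The following is a minimal sketch of logging parameters and metrics to a run, assuming `sagemaker` version 2.123.0 or later and an environment with SageMaker AI permissions; the experiment and run names are illustrative placeholders, not values from this guide:

```python
# Minimal sketch of run tracking with the SageMaker Experiments Python SDK.
# "my-experiment" and "baseline-run" are illustrative placeholders.
from sagemaker.experiments import Run

with Run(experiment_name="my-experiment", run_name="baseline-run") as run:
    # Inputs and parameters for this iteration
    run.log_parameter("learning_rate", 0.01)
    run.log_parameters({"epochs": 3, "batch_size": 64})

    # ... train the model here ...

    # Results, logged per step so they can be charted in Studio Classic
    for epoch, loss in enumerate([0.9, 0.5, 0.3]):
        run.log_metric(name="train:loss", value=loss, step=epoch)
```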

## Migrate from Experiments Classic to Amazon SageMaker AI with MLflow
<a name="experiments-mlflow-migration"></a>

Past experiments created using Experiments Classic are still available to view in Studio Classic. To continue using past experiment code with MLflow, you must update your training code to use the MLflow SDK and run the training experiments again. For more information about getting started with the MLflow SDK and the AWS MLflow plugin, see [Integrate MLflow with your environment](mlflow-track-experiments.md).
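As a rough guide, tracking code like the sketch above maps to MLflow as follows. This is a minimal sketch that assumes the `mlflow` and `sagemaker-mlflow` packages are installed; the tracking server ARN and the experiment and run names are illustrative placeholders:

```python
# Minimal sketch of the same tracking logic after migrating to MLflow.
# The tracking server ARN is an illustrative placeholder for your own server.
import mlflow

mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/my-server"
)
mlflow.set_experiment("my-experiment")

with mlflow.start_run(run_name="baseline-run"):
    mlflow.log_param("learning_rate", 0.01)
    # ... train the model here ...
    for epoch, loss in enumerate([0.9, 0.5, 0.3]):
        mlflow.log_metric("train_loss", loss, step=epoch)
```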

# Example notebooks for Experiments Classic
<a name="experiments-examples"></a>

The following example notebooks demonstrate how to track runs for various model training experiments. You can view the resulting experiments in Studio Classic after running the notebooks. For a tutorial that showcases additional features of Studio Classic, see [Amazon SageMaker Studio Classic Tour](gs-studio-end-to-end.md).

## Track experiments in a notebook environment
<a name="experiments-tutorials-notebooks"></a>

To learn more about tracking experiments in a notebook environment, see the following example notebooks:
+ [Track an experiment while training a Keras model locally](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-experiments/local_experiment_tracking/keras_experiment.html)
+ [Track an experiment while training a PyTorch model locally or in your notebook](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-experiments/local_experiment_tracking/pytorch_experiment.html)
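At their core, these notebooks follow the same pattern: train in the local notebook kernel and attach per-epoch metrics and output files to a single run. A minimal sketch of that pattern follows, in which `train_one_epoch` and `model.bin` are hypothetical stand-ins for your own training code and output file:

```python
# Hedged sketch of local (in-notebook) experiment tracking.
# train_one_epoch() and model.bin are hypothetical stand-ins for your code.
import random

from sagemaker.experiments import Run

def train_one_epoch():
    """Hypothetical stand-in for one epoch of real training."""
    return random.random(), random.random()  # (loss, accuracy)

with Run(experiment_name="local-training", run_name="pytorch-local") as run:
    run.log_parameter("optimizer", "adam")
    for epoch in range(3):
        train_loss, val_accuracy = train_one_epoch()
        run.log_metric(name="train:loss", value=train_loss, step=epoch)
        run.log_metric(name="val:accuracy", value=val_accuracy, step=epoch)
    # Upload a local file to Amazon S3 and record it as an output artifact
    run.log_file("model.bin", name="trained-model")
```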

## Track bias and explainability for your experiments with SageMaker Clarify
<a name="experiments-tutorials-clarify"></a>

For a step-by-step guide on tracking bias and explainability for your experiments, see the following example notebook:
+ [Fairness and Explainability with SageMaker Clarify](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-experiments/sagemaker_clarify_integration/tracking_bias_explainability.html)
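One plausible shape for the core of that notebook is sketched below, under the assumption that SageMaker AI jobs launched inside a run context are associated with that run; the role ARN, bucket, and column names are illustrative placeholders:

```python
# Hedged sketch: run a SageMaker Clarify pre-training bias job from within a
# run context so the resulting report is tracked with the run. Role ARN,
# bucket, and column names are illustrative placeholders.
from sagemaker import Session, clarify
from sagemaker.experiments import Run

session = Session()
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::111122223333:role/MySageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # the favorable label value
    facet_name="age",               # the column to check for bias
)
data_config = clarify.DataConfig(
    s3_data_input_path="s3://amzn-s3-demo-bucket/train.csv",
    s3_output_path="s3://amzn-s3-demo-bucket/bias-report",
    label="target",
    dataset_type="text/csv",
)

with Run(experiment_name="clarify-bias-tracking", sagemaker_session=session):
    processor.run_pre_training_bias(
        data_config=data_config, data_bias_config=bias_config
    )
```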

## Track experiments for SageMaker training jobs using script mode
<a name="experiments-tutorials-scripts"></a>

For more information about tracking experiments for SageMaker training jobs, see the following example notebooks:
+ [Run a SageMaker AI Experiment with PyTorch Distributed Data Parallel - MNIST Handwritten Digits Classification](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-experiments/sagemaker_job_tracking/pytorch_distributed_training_experiment.html)
+ [Track an experiment while training a PyTorch model with a SageMaker Training Job](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-experiments/sagemaker_job_tracking/pytorch_script_mode_training_job.html)
+ [Train a TensorFlow model with a SageMaker training job and track it using SageMaker Experiments](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-experiments/sagemaker_job_tracking/tensorflow_script_mode_training_job.html)
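The common pattern in these notebooks is to launch the training job inside a run context, and then reattach to the run from inside the training script with `load_run`. A minimal sketch follows, in which the script name, framework versions, and role ARN are illustrative placeholders:

```python
# Hedged sketch of script-mode tracking: a job launched inside a run context
# is linked to that run. Script name, versions, and role are placeholders.
from sagemaker.experiments import Run
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",  # your training script
    role="arn:aws:iam::111122223333:role/MySageMakerRole",
    framework_version="1.13",
    py_version="py39",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

with Run(experiment_name="job-tracking", run_name="pytorch-job"):
    estimator.fit()  # the job inherits this run's experiment configuration

# Inside train.py, reattach to the same run and log metrics:
#
#   from sagemaker.experiments import load_run
#   with load_run() as run:
#       run.log_metric(name="train:loss", value=loss, step=epoch)
```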

# View experiments and runs
<a name="experiments-view-compare"></a>

Amazon SageMaker Studio Classic provides an experiments browser that you can use to view lists of experiments and runs. You can choose a single entity to view its details, or choose multiple entities to compare them. You can filter the list of experiments by entity name, type, and tags.
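If you prefer to script the same lookup, the experiments browser roughly corresponds to the `ListExperiments` and `ListTrialComponents` API operations. A minimal sketch using boto3, where the experiment name is an illustrative placeholder:

```python
# Hedged sketch of browsing experiments and runs programmatically with the
# low-level SageMaker API. "my-experiment" is an illustrative placeholder.
import boto3

sm = boto3.client("sagemaker")

# List experiments, newest first
response = sm.list_experiments(SortBy="CreationTime", SortOrder="Descending")
for experiment in response["ExperimentSummaries"]:
    print(experiment["ExperimentName"])

# List the runs (stored as trial components) under one experiment
response = sm.list_trial_components(ExperimentName="my-experiment")
for run in response["TrialComponentSummaries"]:
    print(run["TrialComponentName"])
```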

**To view experiments and runs**

1. To view your experiments in Studio Classic, choose **Experiments** in the left sidebar.

   Select the name of an experiment to view all of its associated runs. You can search for experiments by typing directly into the **Search** bar or by filtering on experiment type. You can also choose which columns to display in your experiment or run list.

   It might take a moment for the list to refresh and display a new experiment or experiment run. Choose **Refresh** to update the page. Your experiment list should look similar to the following:  
![\[A list of experiments in the SageMaker Experiments UI\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/experiments-classic/experiments-overview.png)

1. In the experiments list, double-click an experiment to display a list of the runs in the experiment.
**Note**  
Experiment runs that SageMaker AI jobs and containers create automatically are visible in the Studio Classic Experiments UI by default. To hide runs created by SageMaker AI jobs for a given experiment, choose the settings icon (![\[Settings icon\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/icons/Settings_squid.png)) and turn off the **Show jobs** toggle.  
![\[A list of experiment runs in the SageMaker Experiments UI\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/experiments-classic/experiments-runs-overview.png)

1. Double-click a run to display information about a specific run.

   In the **Overview** pane, choose any of the following headings to see available information about each run:
   + **Metrics** – Metrics that are logged during a run.
   + **Charts** – Build your own charts to compare runs.
   + **Output artifacts** – Any resulting artifacts of the experiment run and the artifact locations in Amazon S3.
   + **Bias reports** – Pre-training or post-training bias reports generated using Clarify.
   + **Explainability** – Explainability reports generated using Clarify.
   + **Debugs** – A list of Debugger rules and any issues found.
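
The same per-run information shown in the **Overview** pane can also be read programmatically with the `DescribeTrialComponent` API operation. A minimal sketch using boto3; the trial component name is an illustrative placeholder:

```python
# Hedged sketch: read the metrics and output artifacts that the Overview pane
# displays. The trial component name is an illustrative placeholder.
import boto3

sm = boto3.client("sagemaker")
component = sm.describe_trial_component(TrialComponentName="baseline-run")

# Metrics logged during the run (the Metrics heading)
for metric in component.get("Metrics", []):
    print(metric["MetricName"], metric.get("Last"))

# Resulting artifacts and their Amazon S3 locations (Output artifacts)
for name, artifact in component.get("OutputArtifacts", {}).items():
    print(name, artifact["Value"])
```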