

# Amazon SageMaker Studio
<a name="studio-updated"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

 Amazon SageMaker Studio is the latest web-based experience for running ML workflows. Studio offers a suite of integrated development environments (IDEs). These include Code Editor (based on Code-OSS, Visual Studio Code - Open Source), a new JupyterLab application, RStudio, and Amazon SageMaker Studio Classic. For more information, see [Applications supported in Amazon SageMaker Studio](studio-updated-apps.md). 

The new web-based UI in Studio is faster and provides access to all SageMaker AI resources, including jobs and endpoints, in one interface. ML practitioners can also choose their preferred IDE to accelerate ML development. A data scientist can use JupyterLab to explore data and tune models. In addition, a machine learning operations (MLOps) engineer can use Code Editor with the pipelines tool in Studio to deploy and monitor models in production. 

 The previous Studio experience is still supported as Amazon SageMaker Studio Classic. Studio Classic is the default experience for existing customers and is available as an application in Studio. For more information about Studio Classic, see [Amazon SageMaker Studio Classic](studio.md). For information about how to migrate from Studio Classic to Studio, see [Migration from Amazon SageMaker Studio Classic](studio-updated-migrate.md). 

 Studio offers the following benefits: 
+ A new JupyterLab application that has a faster start-up time and is more reliable than the existing Studio Classic application. For more information, see [SageMaker JupyterLab](studio-updated-jl.md).
+ A suite of IDEs that open in a separate tab, including the new Code Editor, based on Code-OSS, Visual Studio Code - Open Source application. Users can interact with supported IDEs in a full screen experience. For more information, see [Applications supported in Amazon SageMaker Studio](studio-updated-apps.md).
+ Access to all of your SageMaker AI resources in one place. Studio displays running instances across all of your applications.  
+ Access to all training jobs in a single view, regardless of whether they were scheduled from notebooks or initiated from Amazon SageMaker JumpStart.
+ Simplified model deployment workflows and endpoint management and monitoring directly from Studio. You don't need to access the SageMaker AI console. 
+ Automatic creation of all configured applications when you onboard to a domain. For information about onboarding to a domain, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).
+ An improved JumpStart experience where you can discover, import, register, fine-tune, and deploy a foundation model. For more information, see [SageMaker JumpStart pretrained models](studio-jumpstart.md).

**Topics**
+ [Launch Amazon SageMaker Studio](studio-updated-launch.md)
+ [Amazon SageMaker Studio UI overview](studio-updated-ui.md)
+ [Amazon EFS auto-mounting in Studio](studio-updated-automount.md)
+ [Idle shutdown](studio-updated-idle-shutdown.md)
+ [Applications supported in Amazon SageMaker Studio](studio-updated-apps.md)
+ [Connect your Remote IDE to SageMaker spaces with remote access](remote-access.md)
+ [Bring your own image (BYOI)](studio-updated-byoi.md)
+ [Lifecycle configurations within Amazon SageMaker Studio](studio-lifecycle-configurations.md)
+ [Amazon SageMaker Studio spaces](studio-updated-spaces.md)
+ [Trusted identity propagation with Studio](trustedidentitypropagation.md)
+ [Perform common UI tasks](studio-updated-common.md)
+ [NVMe stores with Amazon SageMaker Studio](studio-updated-nvme.md)
+ [Local mode support in Amazon SageMaker Studio](studio-updated-local.md)
+ [View your Studio running instances, applications, and spaces](studio-updated-running.md)
+ [Stop and delete your Studio running applications and spaces](studio-updated-running-stop.md)
+ [SageMaker Studio image support policy](sagemaker-distribution.md)
+ [Amazon SageMaker Studio pricing](studio-updated-cost.md)
+ [Troubleshooting](studio-updated-troubleshooting.md)
+ [Migration from Amazon SageMaker Studio Classic](studio-updated-migrate.md)
+ [Amazon SageMaker Studio Classic](studio.md)

# Launch Amazon SageMaker Studio
<a name="studio-updated-launch"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

 This page's topics demonstrate how to launch Amazon SageMaker Studio from the Amazon SageMaker AI console and the AWS Command Line Interface (AWS CLI). 

**Topics**
+ [Prerequisites](#studio-updated-launch-prereq)
+ [Launch from the Amazon SageMaker AI console](#studio-updated-launch-console)
+ [Launch using the AWS CLI](#studio-updated-launch-cli)

## Prerequisites
<a name="studio-updated-launch-prereq"></a>

 Before you begin, complete the following prerequisites: 
+ Onboard to a SageMaker AI domain with Studio access. If you don't have permissions to set Studio as the default experience for your domain, contact your administrator. For more information, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md). 
+ Update the AWS CLI by following the steps in [Installing the current AWS CLI Version](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html#install-tool-bundled). 
+ From your local machine, run `aws configure` and provide your AWS credentials. For information about AWS credentials, see [Understanding and getting your AWS credentials](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html).
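
Once configured, you can confirm that the credentials resolve to a valid principal; for example:

```
# Prints the account, user ID, and ARN for the configured credentials
aws sts get-caller-identity
```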

## Launch from the Amazon SageMaker AI console
<a name="studio-updated-launch-console"></a>

Complete the following procedure to launch Studio from the Amazon SageMaker AI console.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1.  From the left navigation pane, choose Studio. 

1.  From the Studio landing page, select the domain and user profile for launching Studio. 

1.  Choose **Open Studio**. 

1.  To launch Studio, choose **Launch personal Studio**. 

## Launch using the AWS CLI
<a name="studio-updated-launch-cli"></a>

This section demonstrates how to launch Studio using the AWS CLI. The procedure depends on whether the domain uses AWS Identity and Access Management (IAM) authentication or AWS IAM Identity Center authentication. When your domain uses IAM authentication, you can use the AWS CLI to launch Studio by creating a presigned domain URL. For information about launching Studio with IAM Identity Center authentication, see [Use custom setup for Amazon SageMaker AI](onboard-custom.md). 

### Launch if Studio is the default experience
<a name="studio-updated-launch-console-updated"></a>

 The following code snippet demonstrates how to launch Studio from the AWS CLI using a presigned domain URL if Studio is the default experience. For more information, see [create-presigned-domain-url](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-presigned-domain-url.html). 

```
aws sagemaker create-presigned-domain-url \
--region region \
--domain-id domain-id \
--user-profile-name user-profile-name \
--session-expiration-duration-in-seconds 43200
```
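
The command returns a JSON response with an `AuthorizedUrl` field; open that URL in a browser before the session expires. As a sketch with the same placeholder values, the URL alone can be extracted with a `--query` expression:

```
# Print only the presigned URL; open it in a browser to start the session
aws sagemaker create-presigned-domain-url \
--region region \
--domain-id domain-id \
--user-profile-name user-profile-name \
--query AuthorizedUrl --output text
```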

### Launch if Amazon SageMaker Studio Classic is your default experience
<a name="studio-updated-launch-console-classic"></a>

 The following code snippet demonstrates how to launch Studio from the AWS CLI using a presigned domain URL if Studio Classic is the default experience. For more information, see [create-presigned-domain-url](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-presigned-domain-url.html). 

```
aws sagemaker create-presigned-domain-url \
--region region \
--domain-id domain-id \
--user-profile-name user-profile-name \
--session-expiration-duration-in-seconds 43200 \
--landing-uri studio::
```

# Amazon SageMaker Studio UI overview
<a name="studio-updated-ui"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

 The Amazon SageMaker Studio user interface is divided into three distinct parts. This page describes each part and its components. 
+  **Navigation bar** – This section of the UI includes the URL, breadcrumbs, notifications, and user options. 
+  **Navigation pane** – This section of the UI includes a list of the applications that are supported in Studio and options for the main workflows in Studio. 
+  **Content pane** – The main working area that displays the current page of the Studio UI that you have open.

![\[Amazon SageMaker Studio home page with navigation pane and content pane (main working area).\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/monarch/studio-updated-ui.png)


**Topics**
+ [Amazon SageMaker Studio navigation bar](#studio-updated-ui-top)
+ [Amazon SageMaker Studio navigation pane](#studio-updated-ui-left)
+ [Studio content pane](#studio-updated-ui-working)

## Amazon SageMaker Studio navigation bar
<a name="studio-updated-ui-top"></a>

 The navigation bar of the Studio UI includes the URL, breadcrumbs, notifications, and user options. 

 **URL Structure** 

 The URL of Studio changes as you navigate the UI. When you navigate to a different page, the URL changes to reflect that page. Because the URL reflects the current page, you can open any page in the Studio UI directly, without navigating to the landing page first. 

 **Breadcrumbs** 

 As you navigate through the Studio UI, the breadcrumbs keep track of the parent pages of the current page. By choosing one of these breadcrumbs, you can navigate to parent pages in the UI. 

 **Notifications** 

 The notifications section of the UI gives information about important changes to Studio, updates to applications, and issues to resolve. 

 **User options** 

Choose the user options icon (![\[User icon with a circular avatar placeholder and a downward-pointing arrow.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/monarch/user-settings.png)) to view information about the user profile that is currently using Studio and to sign out of Studio.  

## Amazon SageMaker Studio navigation pane
<a name="studio-updated-ui-left"></a>

 The navigation pane of the UI includes a list of the applications that are supported in Studio. It also provides options for the main workflows in Studio. 

 This section of the UI can be used in an expanded or collapsed state. To change whether the section is expanded or collapsed, select the **Collapse** icon (![\[Square icon with "ID" text representing an identity or identification concept.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/monarch/collapse-ui.png)). 

 **Applications** 

 The applications section lists the applications that are available in Studio. If you choose one of the application types, you are directed to the landing page for that application. 

 **Workflows** 

 The list of workflows includes all of the available actions that you can take in Studio. Choose one of the options to navigate to the landing page for that workflow. If there are multiple workflows available for that option, choosing the option opens a dropdown menu where you can select the desired landing page. 

 The following list describes the options and provides a link for more information. 
+  **Home**– The main landing page with an overview, getting started, and what’s new. 
+  **Running instances**– All of the instances that are currently running in Studio. For more information, see [View your Studio running instances, applications, and spaces](studio-updated-running.md). 
+  **Data**– Data preparation options where you can collaborate to store, explore, prepare, transform, and share your data.  
  +  For more information about Amazon SageMaker Data Wrangler, see [Data preparation](canvas-data-prep.md). 
  +  For more information about Amazon SageMaker Feature Store, see [Create, store, and share features with Feature Store](feature-store.md). 
  +  For more information about Amazon EMR clusters, see [Data preparation using Amazon EMR](studio-notebooks-emr-cluster.md). 
+  **Auto ML**– Automatically build, train, tune, and deploy machine learning (ML) models. For more information, see [Amazon SageMaker Canvas](canvas.md). 
+  **Experiments**– Create, manage, analyze, and compare your machine learning experiments using Amazon SageMaker Experiments. For more information, see [Amazon SageMaker Experiments in Studio Classic](experiments.md). 
+  **Jobs**– View jobs created in Studio.  
  +  For more information about training, see [Model training](train-model.md). 
  +  For more information about model evaluation, see [Understand options for evaluating large language models with SageMaker Clarify](clarify-foundation-model-evaluate.md). 
+  **Pipelines**– Automate your ML workflow with Amazon SageMaker Pipelines, which provides resources to help you build, track, and manage your pipeline resources. For more information, see [Pipelines](pipelines.md).
+  **Models**– Organize your models into groups and collections in the model registry, where you can manage model versions, view metadata, and deploy models to production. For more information, see [Model Registration and Deployment with Model Registry](model-registry.md).
+  **JumpStart**– Amazon SageMaker JumpStart provides pretrained, open-source models for a wide range of problem types to help you get started with machine learning. For more information, see [SageMaker JumpStart pretrained models](studio-jumpstart.md). 
+  **Deployments**– Deploy your machine learning (ML) models for inference.
  +  For more information about Amazon SageMaker Inference Recommender, see [Amazon SageMaker Inference Recommender](inference-recommender.md). 
  +  For more information about endpoints, see [Deploy models for inference](deploy-model.md). 

## Studio content pane
<a name="studio-updated-ui-working"></a>

 The main working area is also called the content pane. It displays the current page of the Studio UI that you have open. 

 **Studio home page** 

 The Studio home page is the primary landing page in the main working area. The home page includes two tabs: **Overview** and **Getting started**. 

 **Overview** 

 The **Overview** tab includes options to start spaces for popular application types, get started with pre-built and automated solutions for ML workflows, and links to common tasks in the Studio UI. 

 **Getting started** 

 The **Getting started** tab includes information, guidance, and resources on how to begin with Studio. This includes a guided tour of the Studio UI, a link to documentation about Studio, and a selection of quick tips. 

# Amazon EFS auto-mounting in Studio
<a name="studio-updated-automount"></a>

 Amazon SageMaker AI supports automatically mounting a folder in an Amazon EFS volume for each user in a domain. Using this folder, users can share data between their own private spaces. However, users cannot share data with other users in the domain. Users only have access to their own folder. 

 The user’s folder can be accessed through a folder named `user-default-efs`. This folder is present in the `$HOME` directory of the Studio application.

 For information about opting out of Amazon EFS auto-mounting, see [Opt out of Amazon EFS auto-mounting](studio-updated-automount-optout.md). 

 Amazon EFS auto-mounting also facilitates the migration of data from Studio Classic to Studio. For more information, see [(Optional) Migrate data from Studio Classic to Studio](studio-updated-migrate-data.md). 

 **Access point information** 

 When auto-mounting is activated, SageMaker AI uses an Amazon EFS access point to facilitate access to the data in the Amazon EFS volume. For more information about access points, see [Working with Amazon EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html). SageMaker AI creates a unique access point for each user profile in the domain, either during user profile creation or during application creation for an existing user profile. The POSIX user value of the access point matches the `HomeEfsFileSystemUid` value of the user profile that the access point is created for. To get this value, see [DescribeUserProfile](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeUserProfile.html#sagemaker-DescribeUserProfile-response-HomeEfsFileSystemUid). The root directory path is also set to the same value as the POSIX user value.  

 SageMaker AI sets the permissions of the new directory to the following values: 

 
+  Owner user ID: `POSIX user value` 
+  Owner group ID: `0` 
+  Permissions: `700` 

 The access point is required to access the Amazon EFS volume. As a result, you cannot delete or update the access point without losing access to the Amazon EFS volume. 
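
As a sketch with placeholder values, the POSIX user value for a profile can be retrieved by querying `describe-user-profile`:

```
# Prints the HomeEfsFileSystemUid, which matches the access point's POSIX user
aws sagemaker describe-user-profile \
--region region \
--domain-id domain-id \
--user-profile-name user-profile-name \
--query HomeEfsFileSystemUid --output text
```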

 **Error resolution** 

 If SageMaker AI encounters an issue when auto-mounting the Amazon EFS user folder during application creation, the application is still created. However, in this case, SageMaker AI creates a file named `error.txt` instead of mounting the Amazon EFS folder. This file describes the error encountered, as well as steps to resolve it. SageMaker AI creates the `error.txt` file in the `user-default-efs` folder located in the `$HOME` directory of the application. 
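
For example, from a terminal in the application, the error details can be inspected directly, assuming the default mount location:

```
# Print the auto-mount error details, if a mount failure occurred
if [ -f "$HOME/user-default-efs/error.txt" ]; then
  cat "$HOME/user-default-efs/error.txt"
fi
```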

# Opt out of Amazon EFS auto-mounting
<a name="studio-updated-automount-optout"></a>

 You can opt out of Amazon SageMaker AI automatically mounting Amazon EFS user folders, either during domain or user profile creation or for an existing domain or user profile. 

## Opt out during domain creation
<a name="studio-updated-automount-optout-domain-creation"></a>

 You can opt out of Amazon EFS auto-mounting when creating a domain using either the console or the AWS Command Line Interface. 

### Console
<a name="studio-updated-automount-optout-domain-creation-console"></a>

Complete the following steps to opt out of Amazon EFS auto-mounting when creating a domain from the console. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1.  Complete the steps in [Use custom setup for Amazon SageMaker AI](onboard-custom.md) with the following modification to set up a domain. 
   +  On the **Configure storage** step, turn off **Automatically mount EFS storage and data**. 

### AWS CLI
<a name="studio-updated-automount-optout-domain-creation-cli"></a>

 Use the following command to opt out of Amazon EFS auto-mounting during domain creation using the AWS CLI. For more information about creating a domain using the AWS CLI, see [Use custom setup for Amazon SageMaker AI](onboard-custom.md).

```
aws --region region sagemaker create-domain \
--domain-name "my-domain-$(date +%s)" \
--vpc-id default-vpc-id \
--subnet-ids subnet-ids \
--auth-mode IAM \
--default-user-settings "ExecutionRole=execution-role-arn,AutoMountHomeEFS=Disabled" \
--default-space-settings "ExecutionRole=execution-role-arn"
```

## Opt out for an existing domain
<a name="studio-updated-automount-optout-domain-existing"></a>

 You can opt out of Amazon EFS auto-mounting for an existing domain using either the console or the AWS CLI. 

### Console
<a name="studio-updated-automount-optout-domain-existing-console"></a>

 Complete the following steps to opt out of Amazon EFS auto-mounting when updating a domain from the console. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1.  On the left navigation under **Admin configurations**, choose **Domains**. 

1.  On the **Domains** page, select the domain that you want to opt out of Amazon EFS auto-mounting for. 

1.  On the **Domain details** page, select the **Domain settings** tab. 

1.  Navigate to the **Storage configurations** section. 

1.  Select **Edit**. 

1.  From the **Edit storage settings** page, turn off **Automatically mount EFS storage and data**. 

1.  Select **Submit**.

### AWS CLI
<a name="studio-updated-automount-optout-domain-existing-cli"></a>

 Use the following command to opt out of Amazon EFS auto-mounting while updating an existing domain using the AWS CLI. 

```
aws --region region sagemaker update-domain \
--domain-id domain-id \
--default-user-settings "AutoMountHomeEFS=Disabled"
```

## Opt out during user profile creation
<a name="studio-updated-automount-optout-user-creation"></a>

 You can opt out of Amazon EFS auto-mounting when creating a user profile using either the console or the AWS CLI. 

### Console
<a name="studio-updated-automount-optout-user-creation-console"></a>

 Complete the following steps to opt out of Amazon EFS auto-mounting when creating a user profile from the console. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1.  Complete the steps in [Add user profiles](domain-user-profile-add.md) with the following modification to create a user profile. 
   +  On the **Data and Storage** step, turn off **Inherit settings from domain**. This allows the user to have a different value than the defaults that are set for the domain.  
   +  Turn off **Automatically mount EFS storage and data**. 

### AWS CLI
<a name="studio-updated-automount-optout-user-creation-cli"></a>

 Use the following command to opt out of Amazon EFS auto-mounting during user profile creation using the AWS CLI. Valid values for `AutoMountHomeEFS` are `Enabled`, `Disabled`, and `DefaultAsDomain`. For more information about creating a user profile using the AWS CLI, see [Add user profiles](domain-user-profile-add.md).

```
aws --region region sagemaker create-user-profile \
--domain-id domain-id \
--user-profile-name "user-profile-$(date +%s)" \
--user-settings "ExecutionRole=arn:aws:iam::account-id:role/execution-role-name,AutoMountHomeEFS=Disabled"
```

## Opt out for an existing user profile
<a name="studio-updated-automount-optout-user-existing"></a>

 You can opt out of Amazon EFS auto-mounting for an existing user profile using either the console or the AWS CLI. 

### Console
<a name="studio-updated-automount-optout-user-existing-console"></a>

 Complete the following steps to opt out of Amazon EFS auto-mounting when updating a user profile from the console. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1.  On the left navigation under **Admin configurations**, choose **Domains**. 

1.  On the **Domains** page, select the domain containing the user profile that you want to opt out of Amazon EFS auto-mounting for. 

1.  On the **Domain details** page, select the **User profiles** tab. 

1.  Select the user profile to update. 

1.  From the **User Details** tab, navigate to the **AutoMountHomeEFS** section. 

1.  Select **Edit**. 

1.  From the **Edit storage settings** page, turn off **Inherit settings from domain**. This allows the user to have a different value than the defaults that are set for the domain.  

1.  Turn off **Automatically mount EFS storage and data**. 

1.  Select **Submit**. 

### AWS CLI
<a name="studio-updated-automount-optout-user-existing-cli"></a>

 Use the following command to opt out of Amazon EFS auto-mounting while updating an existing user profile using the AWS CLI. 

```
aws --region region sagemaker update-user-profile \
--domain-id domain-id \
--user-profile-name user-profile-name \
--user-settings "AutoMountHomeEFS=DefaultAsDomain"
```

# Idle shutdown
<a name="studio-updated-idle-shutdown"></a>

Amazon SageMaker AI supports shutting down idle resources to manage costs and prevent overruns from idle, billable resources. It does this by detecting an application’s idle state and shutting the application down when the idle criteria are met. 

SageMaker AI supports idle shutdown for the following applications. Idle shutdown must be set for each application type independently. 
+  JupyterLab 
+  Code Editor, based on Code-OSS, Visual Studio Code - Open Source 

 Idle shutdown can be set at either the domain or user profile level. When idle shutdown is set at the domain level, the idle shutdown settings apply to all applications created in the domain. When set at the user profile level, the idle shutdown settings apply only to the specific users that they are set for. User profile settings override domain settings.  
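
To check the idle shutdown settings currently in effect at the domain level, you can, for example, query the domain's default user settings (placeholder values shown):

```
# Shows the JupyterLab idle shutdown configuration in the domain defaults
aws sagemaker describe-domain \
--region region \
--domain-id domain-id \
--query 'DefaultUserSettings.JupyterLabAppSettings.AppLifecycleManagement'
```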

**Note**  
Idle shutdown requires the SageMaker Distribution (SMD) image, version 2.0 or later. Domains using an older SMD version can’t use the feature and must use a lifecycle configuration (LCC) to manage auto-shutdown instead. 

## Definition of idle
<a name="studio-updated-idle-shutdown-definition"></a>

 Idle shutdown settings only apply when the application becomes idle with no jobs running. SageMaker AI doesn’t start the idle shutdown timer until the instance becomes idle. The definition of idle differs based on whether the application type is JupyterLab or Code Editor. 

 For JupyterLab applications, the instance is considered idle when the following conditions are met: 
+  No active Jupyter kernel sessions 
+  No active Jupyter terminal sessions 

 For Code Editor applications, the instance is considered idle when the following conditions are met: 
+  No text file or notebook changes 
+  No files being viewed 
+  No interaction with the terminal

# Set up idle shutdown
<a name="studio-updated-idle-shutdown-setup"></a>

 The following sections show how to set up idle shutdown from either the console or using the AWS CLI. Idle shutdown can be set at either the domain or user profile level. 

## Prerequisites
<a name="studio-updated-idle-shutdown-setup-prereq"></a>

 To use idle shutdown with your application, you must complete the following prerequisites. 
+ Ensure that your application is using SageMaker Distribution (SMD) version 2.0 or later. You can select this version during application creation or update the image version of the application after creation. For more information, see [Update the SageMaker Distribution Image](studio-updated-jl-update-distribution-image.md). 
+ For applications built with custom images, idle shutdown is supported if your custom image is created with SageMaker Distribution (SMD) version 2.0 or later as the base image. If the custom image is created with a different base image, then you must install the [jupyter-activity-monitor-extension >= 0.3.1](https://anaconda.org/conda-forge/jupyter-activity-monitor-extension) extension on the image and attach the image to your Amazon SageMaker AI domain for JupyterLab applications. For more information about custom images, see [Bring your own image (BYOI)](studio-updated-byoi.md).

## From the Console
<a name="studio-updated-idle-shutdown-setup-console"></a>

 The following sections show how to enable idle shutdown from the console. 

### Add when creating a new domain
<a name="studio-updated-idle-shutdown-setup-console-new-domain"></a>

1. Create a domain by following the steps in [Use custom setup for Amazon SageMaker AI](onboard-custom.md). 

1.  When configuring the application settings in the domain, navigate to either the Code Editor or JupyterLab section.  

1.  Select **Enable idle shutdown**. 

1.  Enter a default idle shutdown time in minutes. This value defaults to `10,080` if no value is entered. 

1.  (Optional) Select **Allow users to set custom idle shutdown time** to allow users to modify the idle shutdown time. 
   +  Enter the maximum idle shutdown time, in minutes, that users can set. A maximum value is required. The minimum value is set by Amazon SageMaker AI and is `60`. 

### Add to an existing domain
<a name="studio-updated-idle-shutdown-setup-console-existing-domain"></a>

**Note**  
If idle shutdown is set when applications are running, they must be restarted for idle shutdown settings to take effect. 

1.  Navigate to the domain. 

1.  Choose the **App Configurations** tab. 

1.  From the **App Configurations** tab, navigate to either the Code Editor or JupyterLab section. 

1.  Select **Edit**. 

1.  Select **Enable idle shutdown**. 

1.  Enter a default idle shutdown time in minutes. This value defaults to `10,080` if no value is entered. 

1.  (Optional) Select **Allow users to set custom idle shutdown time** to allow users to modify the idle shutdown time. 
   +  Enter the maximum idle shutdown time, in minutes, that users can set. A maximum value is required. The minimum value is set by Amazon SageMaker AI and is `60`. 

1.  Select **Submit**. 

### Add when creating a new user profile
<a name="studio-updated-idle-shutdown-setup-console-new-userprofile"></a>

1. Add a user profile by following the steps at [Add user profiles](domain-user-profile-add.md). 

1.  When configuring the application settings for the user profile, navigate to either the Code Editor or JupyterLab section. 

1.  Select **Enable idle shutdown**. 

1.  Enter a default idle shutdown time in minutes. This value defaults to `10,080` if no value is entered. 

1.  (Optional) Select **Allow users to set custom idle shutdown time** to allow users to modify the idle shutdown time. 
   +  Enter the maximum idle shutdown time, in minutes, that users can set. A maximum value is required. The minimum value is set by Amazon SageMaker AI and is `60`. 

1.  Select **Save Changes**. 

### Add to an existing user profile
<a name="studio-updated-idle-shutdown-setup-console-existing-userprofile"></a>

**Note**  
If idle shutdown is set when applications are running, they must be restarted for idle shutdown settings to take effect. 

1.  Navigate to the user profile. 

1.  Choose the **App Configurations** tab. 

1.  From the **App Configurations** tab, navigate to either the Code Editor or JupyterLab section.  

1.  Select **Edit**. 

1.  If idle shutdown is configured for the domain, the domain settings are shown by default. 

1.  Select **Enable idle shutdown**. 

1.  Enter a default idle shutdown time in minutes. This value defaults to `10,080` if no value is entered. 

1.  (Optional) Select **Allow users to set custom idle shutdown time** to allow users to modify the idle shutdown time. 
   +  Enter the maximum idle shutdown time, in minutes, that users can set. A maximum value is required. The minimum value is set by Amazon SageMaker AI and is `60`. 

1.  Select **Save Changes**. 

## From the AWS CLI
<a name="studio-updated-idle-shutdown-setup-cli"></a>

 The following sections show how to enable idle shutdown using the AWS CLI. 

**Note**  
To enforce a specific timeout value from the AWS CLI, you must set `IdleTimeoutInMinutes`, `MaxIdleTimeoutInMinutes`, and `MinIdleTimeoutInMinutes` to the same value.

### Domain
<a name="studio-updated-idle-shutdown-setup-cli-domain"></a>

 The following command shows how to enable idle shutdown when updating an existing domain. To add idle shutdown for a new domain, use the `create-domain` command instead. 

**Note**  
If idle shutdown is set when applications are running, they must be restarted for idle shutdown settings to take effect. 

```
aws sagemaker update-domain --region region --domain-id domain-id \
--default-user-settings file://default-user-settings.json

## default-user-settings.json example for enforcing the default timeout
{
    "JupyterLabAppSettings": {
        "AppLifecycleManagement": {
            "IdleSettings": {
                "LifecycleManagement": "ENABLED",
                "IdleTimeoutInMinutes": 120,
                "MaxIdleTimeoutInMinutes": 120,
                "MinIdleTimeoutInMinutes": 120
            }
        }
    }
}

## default-user-settings.json example for letting users customize the default timeout, between 2-5 hours
{
    "JupyterLabAppSettings": {
        "AppLifecycleManagement": {
            "IdleSettings": {
                "LifecycleManagement": "ENABLED",
                "IdleTimeoutInMinutes": 120,
                "MinIdleTimeoutInMinutes": 120,
                "MaxIdleTimeoutInMinutes": 300
            }
        }
    }
}
```
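
After the update completes, you can confirm the configuration by describing the domain. The following is a sketch that assumes the AWS CLI is configured with access to the domain; replace the Region and domain ID placeholders with your own values:

```shell
# Inspect the idle shutdown settings now applied to the domain.
aws sagemaker describe-domain --region region --domain-id domain-id \
    --query "DefaultUserSettings.JupyterLabAppSettings.AppLifecycleManagement.IdleSettings"
```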

### User profile
<a name="studio-updated-idle-shutdown-setup-cli-userprofile"></a>

 The following command shows how to enable idle shutdown when updating an existing user profile. To add idle shutdown for a new user profile, use the `create-user-profile` command instead. 

**Note**  
If idle shutdown is set when applications are running, they must be restarted for idle shutdown settings to take effect. 

```
aws sagemaker update-user-profile --region region --domain-id domain-id \
--user-profile-name user-profile-name --user-settings file://user-settings.json

## user-settings.json example for enforcing the default timeout
{
    "JupyterLabAppSettings": {
        "AppLifecycleManagement": {
            "IdleSettings": {
                "LifecycleManagement": "ENABLED",
                "IdleTimeoutInMinutes": 120,
                "MaxIdleTimeoutInMinutes": 120,
                "MinIdleTimeoutInMinutes": 120
            }
        }
    }
}

## user-settings.json example for letting users customize the default timeout, between 2-5 hours
{
    "JupyterLabAppSettings": {
        "AppLifecycleManagement": {
            "IdleSettings": {
                "LifecycleManagement": "ENABLED",
                "IdleTimeoutInMinutes": 120,
                "MinIdleTimeoutInMinutes": 120,
                "MaxIdleTimeoutInMinutes": 300
            }
        }
    }
}
```

# Update default idle shutdown settings
<a name="studio-updated-idle-shutdown-update"></a>

 You can update the default idle shutdown settings at either the domain or user profile level. 

**Note**  
If idle shutdown is set when applications are running, they must be restarted for idle shutdown settings to take effect. 

## Update domain settings
<a name="studio-updated-idle-shutdown-update-domain"></a>

1.  Navigate to the domain. 

1.  Choose the **App Configurations** tab. 

1.  From the **App Configurations** tab, navigate to either the Code Editor or JupyterLab section.  

1.  In the section for the application that you want to modify the idle shutdown time limit for, select **Edit**. 

1.  Update the idle shutdown settings for the domain. 

1.  Select **Save Changes**. 

## Update user profile settings
<a name="studio-updated-idle-shutdown-update-userprofile"></a>

1.  Navigate to the domain. 

1.  Choose the **User profiles** tab. 

1.  From the **User profiles** tab, select the user profile to edit. 

1.  From the **User profile** page, choose the **Applications** tab. 

1.  On the **Applications** tab, navigate to either the Code Editor or JupyterLab section.  

1.  In the section for the application that you want to modify the idle shutdown time limit for, select **Edit**. 

1.  Update the idle shutdown settings for the user profile. 

1.  Select **Save Changes**. 

# Modify your idle shutdown time limit
<a name="studio-updated-idle-shutdown-modify"></a>

 Users may be able to modify the idle shutdown time limit if the administrator grants access when setting up idle shutdown. The administrator can also apply an upper limit to the idle shutdown time. A user can set the value anywhere between the lower and upper limits. 

1.  Launch Amazon SageMaker Studio by following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md). 

1.  From the **Applications** section, select the application type to update the idle shutdown time for. 

1.  Select the space to update. 

1.  Update **Idle shutdown (mins)** with your desired value. 
**Note**  
If idle shutdown is set when applications are running, they must be restarted for idle shutdown settings to take effect. 

# Applications supported in Amazon SageMaker Studio
<a name="studio-updated-apps"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

 Amazon SageMaker Studio supports the following applications: 
+  **Code Editor, based on Code-OSS, Visual Studio Code - Open Source**– Code Editor offers a lightweight and powerful integrated development environment (IDE) with familiar shortcuts, an integrated terminal, advanced debugging capabilities, and refactoring tools. It is a fully managed, browser-based application in Studio. For more information, see [Code Editor in Amazon SageMaker Studio](code-editor.md). 
+  **Amazon SageMaker Studio Classic**– Amazon SageMaker Studio Classic is a web-based IDE for machine learning. With Studio Classic, you can build, train, debug, deploy, and monitor your machine learning models. For more information, see [Amazon SageMaker Studio Classic](studio.md). 
+  **JupyterLab**– JupyterLab offers a set of capabilities that augment the fully managed notebook offering. It includes kernels that start in seconds, a preconfigured runtime with popular data science and machine learning frameworks, and high-performance block storage. For more information, see [SageMaker JupyterLab](studio-updated-jl.md). 
+  **Amazon SageMaker Canvas**– With SageMaker Canvas, you can use machine learning to generate predictions without writing code. With Canvas, you can chat with popular large language models (LLMs), access ready-to-use models, or build a custom model that's trained on your data. For more information, see [Amazon SageMaker Canvas](canvas.md). 
+  **RStudio**– RStudio is an integrated development environment for R. It includes a console and syntax-highlighting editor that supports running code directly. It also includes tools for plotting, history, debugging, and workspace management. For more information, see [RStudio on Amazon SageMaker AI](rstudio.md). 

# Connect your Remote IDE to SageMaker spaces with remote access
<a name="remote-access"></a>

You can remotely connect from your Remote IDE to Amazon SageMaker Studio spaces. You can use your customized local IDE setup, including AI-assisted development tools and custom extensions, with the scalable compute resources in Amazon SageMaker AI. This guide provides concepts and setup instructions for administrators and users.

A Remote IDE connection establishes a secure connection between your local IDE and SageMaker spaces. This connection lets you:
+ **Access SageMaker AI compute resources** — Run code on scalable SageMaker AI infrastructure from your local environment
+ **Maintain security boundaries** — Work within the same security framework as SageMaker AI
+ **Keep your familiar IDE experience** — Use compatible local extensions, themes, and configurations that support remote development

**Note**  
Not all IDE extensions are compatible with remote development. Extensions that require local GUI components, have architecture dependencies, or need specific client-server interactions may not work properly in the remote environment. Verify that your required extensions support remote development before use.

**Topics**
+ [Key concepts](#remote-access-key-concepts)
+ [Connection methods](#remote-access-connection-methods)
+ [Supported IDEs](#remote-access-supported-ides)
+ [IDE version requirements](#remote-access-ide-version-requirements)
+ [Operating system requirements](#remote-access-os-requirements)
+ [Local machine prerequisites](#remote-access-local-prerequisites)
+ [Image requirements](#remote-access-image-requirements)
+ [Instance requirements](#remote-access-instance-requirements)
+ [Set up remote access](remote-access-remote-setup.md)
+ [Set up Remote IDE](remote-access-local-ide-setup.md)
+ [Supported AWS Regions](remote-access-supported-regions.md)

## Key concepts
<a name="remote-access-key-concepts"></a>
+ **Remote connection** — A secure tunnel between your Remote IDE and a SageMaker space. This connection enables interactive development and code execution using SageMaker AI compute resources.
+ **[Space](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated-spaces.html)** — A dedicated environment within Amazon SageMaker Studio where you can manage the storage and resources for your Studio applications.
+ **Deep link** — A button (direct URL) from the SageMaker UI that initiates a remote connection to your local IDE.

## Connection methods
<a name="remote-access-connection-methods"></a>

There are three main ways to connect your Remote IDE to SageMaker spaces:
+ **Deep link access** — You can connect directly to a specific space by using the **Open space with** button available in SageMaker AI. This uses URL patterns to establish a remote connection and open your SageMaker space in your Remote IDE.
+ **[AWS Toolkit for Visual Studio Code](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/welcome.html)** — You can authenticate with the AWS Toolkit for Visual Studio Code. This allows you to connect to spaces and open a remotely connected window from your Remote IDE.
+ **SSH terminal connection** — You can connect via command line using SSH configuration.

## Supported IDEs
<a name="remote-access-supported-ides"></a>

Remote connection to Studio spaces supports:
+ [Visual Studio Code](https://code.visualstudio.com/)
+ [Kiro](https://kiro.dev/)
+ [Cursor](https://cursor.com/home)

## IDE version requirements
<a name="remote-access-ide-version-requirements"></a>

The following table lists the minimum version requirements for each supported Remote IDE.


| IDE | Minimum version | 
| --- | --- | 
|  Visual Studio Code  |  [v1.90](https://code.visualstudio.com/updates/v1_90) or greater. We recommend using the [latest stable version](https://code.visualstudio.com/updates).  | 
|  Kiro  |  v0.10.78 or greater  | 
|  Cursor  |  v2.6.18 or greater  | 

The AWS Toolkit extension is required to connect your Remote IDE to Studio spaces. For Kiro and Cursor, AWS Toolkit extension version v3.100 or greater is required.
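
You can check your installed IDE version from a terminal. `code --version` is a documented Visual Studio Code CLI flag; for Kiro and Cursor, you can also check the version from the IDE's **About** dialog:

```shell
# Print the installed Visual Studio Code version.
# The first line of output is the version number to compare against the minimum above.
code --version
```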

## Operating system requirements
<a name="remote-access-os-requirements"></a>

You need one of the following operating systems to remotely connect to Studio spaces:
+ macOS 13 or later
+ Windows 10
  + [Windows 10 support ends on October 14, 2025](https://support.microsoft.com/en-us/windows/windows-10-support-ends-on-october-14-2025-2ca8b313-1946-43d3-b55c-2b95b107f281)
+ Windows 11
+ Linux
  + For VS Code, install the official [Microsoft VS Code for Linux](https://code.visualstudio.com/docs/setup/linux), not an open-source version

## Local machine prerequisites
<a name="remote-access-local-prerequisites"></a>

Before connecting your Remote IDE to Studio spaces, ensure your local machine has the required dependencies and network access.

**Important**  
Environments with software installation restrictions may prevent users from installing required dependencies. The AWS Toolkit for Visual Studio Code automatically searches for these dependencies when initiating remote connections and will prompt for installation if any are missing. Coordinate with your IT department to ensure these components are available.

**Required local dependencies**

Your local machine must have the following components installed:
+ **[Remote-SSH Extension](https://code.visualstudio.com/docs/remote/ssh)** — Remote development extension for your IDE (available in the extension marketplace for VS Code, Kiro, and Cursor)
+ **[Session Manager plugin](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html)** — Required for secure session management
+ **SSH Client** — Standard component on most machines ([OpenSSH recommended for Windows](https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse))
+ **IDE CLI Command** — Typically included with IDE installation (for example, `code` for VS Code, `kiro` for Kiro, `cursor` for Cursor)
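
A quick way to confirm the required tools are present is a `command -v` probe. This is a minimal sketch; the IDE CLI command (`code` here) varies by IDE:

```shell
# Report which required local dependencies are on the PATH.
for cmd in ssh session-manager-plugin code; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "found: $cmd"
    else
        echo "missing: $cmd"
    fi
done
```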

**Platform-specific requirements**
+ **Windows users** — PowerShell 5.1 or later is required for SSH terminal connections

**Network connectivity requirements**

Your local machine must have network access to [Session Manager endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html). For example, in US East (N. Virginia) (us-east-1), these endpoints include:
+ ssm.us-east-1.amazonaws.com
+ ssm.us-east-1.api.aws
+ ssmmessages.us-east-1.amazonaws.com
+ ec2messages.us-east-1.amazonaws.com

## Image requirements
<a name="remote-access-image-requirements"></a>

**SageMaker Distribution images**

When using SageMaker Distribution with remote access, use [SageMaker Distribution](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-distribution.html) version 2.7 or later.

**Custom images**

When you [Bring your own image (BYOI)](studio-updated-byoi.md) with remote access, ensure that you follow the [custom image specifications](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated-byoi-specs.html) and ensure the following dependencies are installed:
+ `curl` or `wget` — Required for downloading AWS CLI components
+ `unzip` — Required for extracting AWS CLI installation files
+ `tar` — Required for archive extraction
+ `gzip` — Required for compressed file handling
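
For example, on a Debian- or Ubuntu-based image you might install these dependencies during the image build. The base image and package manager here are assumptions; adjust for your own image:

```shell
# Debian/Ubuntu example; use yum, dnf, or apk for other base images.
apt-get update && apt-get install -y curl unzip tar gzip
```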

## Instance requirements
<a name="remote-access-instance-requirements"></a>
+ **Memory** — 8 GB or more
+ **Instance types** — Use instance types with at least 8 GB of memory. The following instance types are *not* supported because they have less than 8 GB: `ml.t3.medium`, `ml.c7i.large`, `ml.c6i.large`, `ml.c6id.large`, and `ml.c5.large`. For instance type specifications, see the [Amazon EC2 On-Demand Pricing page](https://aws.amazon.com/ec2/pricing/on-demand/).


# Set up remote access
<a name="remote-access-remote-setup"></a>

Before users can connect their Remote IDE to Studio spaces, the administrator must configure permissions. This section provides instructions for administrators on how to set up their Amazon SageMaker AI domain with remote access.

Different connection methods require different IAM permissions. Configure the appropriate permissions based on how your users will connect. Use the following workflow along with the permissions aligned with the connection method.

**Important**  
Currently, remote IDE connections are authenticated using IAM credentials, not IAM Identity Center. This applies even to domains that use the IAM Identity Center [authentication method](https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-custom.html#onboard-custom-authentication-details) for user access to the domain. If you prefer not to use IAM authentication for remote connections, you can opt out by disabling this feature using the `RemoteAccess` condition key in your IAM policies. For more information, see [Remote access enforcement](remote-access-remote-setup-abac.md#remote-access-remote-setup-abac-remote-access-enforcement). When using IAM credentials, remote IDE connections may maintain active sessions even after you log out of your IAM Identity Center session, and can persist for up to 12 hours. To keep your environment secure, administrators must review session duration settings where possible and use caution with shared workstations or public networks.

1. Choose one of the following connection method permissions that align with your users’ [Connection methods](remote-access.md#remote-access-connection-methods).

1. [Create a custom IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) based on the connection method permission.
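
For example, after saving one of the permission documents from the following sections to a local file, you can create the policy with the AWS CLI and then attach it to the relevant role or user. The policy name and file name here are placeholders:

```shell
# Create a customer managed policy from the saved permission document.
aws iam create-policy \
    --policy-name SageMakerRemoteAccessPolicy \
    --policy-document file://remote-access-policy.json
```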

**Topics**
+ [Step 1: Configure security and permissions](#remote-access-remote-setup-permissions)
+ [Step 2: Enable remote access for your space](#remote-access-remote-setup-enable)
+ [Advanced access control](remote-access-remote-setup-abac.md)
+ [Set up Studio to run with subnets without internet access within a VPC](remote-access-remote-setup-vpc-subnets-without-internet-access.md)
+ [Set up automated Studio space filtering when using the AWS Toolkit](remote-access-remote-setup-filter.md)

## Step 1: Configure security and permissions
<a name="remote-access-remote-setup-permissions"></a>

**Topics**
+ [Method 1: Deep link permissions](#remote-access-remote-setup-method-1-deep-link-permissions)
+ [Method 2: AWS Toolkit permissions](#remote-access-remote-setup-method-2-aws-toolkit-permissions)
+ [Method 3: SSH terminal permissions](#remote-access-remote-setup-method-3-ssh-terminal-permissions)

**Important**  
Using broad permissions for `sagemaker:StartSession`, especially with a wildcard (`*`) resource, creates the risk that any user with this permission can initiate a session against any SageMaker space application in the account. For example, data scientists could unintentionally access other users' SageMaker spaces. For production environments, scope these permissions down to specific space ARNs to enforce the principle of least privilege. See [Advanced access control](remote-access-remote-setup-abac.md) for examples of more granular permission policies using resource ARNs, tags, and network-based constraints.

### Method 1: Deep link permissions
<a name="remote-access-remote-setup-method-1-deep-link-permissions"></a>

For users connecting via deep links from the SageMaker UI, use the following permission and attach it to your SageMaker AI [space execution role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-get-execution-role-space) or [domain execution role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-get-execution-role). If the space execution role is not configured, the domain execution role is used by default.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictStartSessionOnSpacesToUserProfile",
            "Effect": "Allow",
            "Action": [
                "sagemaker:StartSession"
            ],
            "Resource": "arn:*:sagemaker:*:*:space/${sagemaker:DomainId}/*",
            "Condition": {
                "ArnLike": {
                    "sagemaker:ResourceTag/sagemaker:user-profile-arn": "arn:aws:sagemaker:*:*:user-profile/${sagemaker:DomainId}/${sagemaker:UserProfileName}"
                }
            }
        }
    ]
}
```

------

### Method 2: AWS Toolkit permissions
<a name="remote-access-remote-setup-method-2-aws-toolkit-permissions"></a>

For users connecting through the AWS Toolkit for Visual Studio Code extension, attach the following policy to one of the following:
+ For IAM authentication, attach this policy to the IAM user or role
+ For IdC authentication, attach this policy to the [Permission sets](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html) managed by the IdC

To show only spaces relevant to the authenticated user, see [Filtering overview](remote-access-remote-setup-filter.md#remote-access-remote-setup-filter-overview).

**Important**  
The following policy using `*` as the resource constraint is only recommended for quick testing purposes. For production environments, you should scope down these permissions to specific space ARNs to enforce the principle of least privilege. See [Advanced access control](remote-access-remote-setup-abac.md) for examples of more granular permission policies using resource ARNs, tags, and network-based constraints.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:ListSpaces",
                "sagemaker:DescribeSpace",
                "sagemaker:ListApps",
                "sagemaker:DescribeApp",
                "sagemaker:DescribeDomain",
                "sagemaker:UpdateSpace",
                "sagemaker:CreateApp",
                "sagemaker:DeleteApp",
                "sagemaker:AddTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowStartSessionOnSpaces",
            "Effect": "Allow",
            "Action": "sagemaker:StartSession",
            "Resource": [
                "arn:aws:sagemaker:us-east-1:111122223333:space/domain-id/space-name-1",
                "arn:aws:sagemaker:us-east-1:111122223333:space/domain-id/space-name-2"
            ]
        }
    ]
}
```

------

### Method 3: SSH terminal permissions
<a name="remote-access-remote-setup-method-3-ssh-terminal-permissions"></a>

For SSH terminal connections, the `StartSession` API is called by the SSH proxy command script below, using the local AWS credentials. See [Configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) for information and instructions on setting up the user's local AWS credentials. To use these permissions:

1. Attach this policy to the IAM user or role associated with the local AWS credentials.

1. If using a named credential profile, modify the proxy command in your SSH config:

   ```
   ProxyCommand '/home/user/sagemaker_connect.sh' '%h' YOUR_CREDENTIAL_PROFILE_NAME
   ```
**Note**  
The policy needs to be attached to the IAM identity (user/role) used in your local AWS credentials configuration, not to the Amazon SageMaker AI domain execution role.

------
#### [ JSON ]

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowStartSessionOnSpecificSpaces",
               "Effect": "Allow",
               "Action": "sagemaker:StartSession",
               "Resource": [
                   "arn:aws:sagemaker:us-east-1:111122223333:space/domain-id/space-name-1",
                   "arn:aws:sagemaker:us-east-1:111122223333:space/domain-id/space-name-2"
               ]
           }
       ]
   }
   ```

------

After setup, users can run `ssh my_studio_space_abc` to start up the space. For more information, see [Method 3: Connect from the terminal via SSH CLI](remote-access-local-ide-setup.md#remote-access-local-ide-setup-local-vs-code-method-3-connect-from-the-terminal-via-ssh-cli).
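
The resulting entry in `~/.ssh/config` might look like the following sketch; the host alias and script path are placeholders consistent with the proxy command example above:

```
Host my_studio_space_abc
    ProxyCommand '/home/user/sagemaker_connect.sh' '%h' YOUR_CREDENTIAL_PROFILE_NAME
```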

## Step 2: Enable remote access for your space
<a name="remote-access-remote-setup-enable"></a>

After you set up the permissions, you must toggle on **Remote Access** and start your space in Studio before the user can connect using their Remote IDE. This setup only needs to be done once.

**Note**  
If your users connect using [Method 2: AWS Toolkit permissions](#remote-access-remote-setup-method-2-aws-toolkit-permissions), this step is optional. AWS Toolkit for Visual Studio Code users can enable remote access from the Toolkit.

**Activate remote access for your Studio space**

1. [Launch Amazon SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated-launch.html#studio-updated-launch-console).

1. Open the Studio UI.

1. Navigate to your space.

1. In the space details, toggle on **Remote Access**.

1. Choose **Run space**.

# Advanced access control
<a name="remote-access-remote-setup-abac"></a>

Amazon SageMaker AI supports [attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) to achieve fine-grained access control for Remote IDE connections using ABAC policies. The following are example ABAC policies for Remote IDE connections.

**Topics**
+ [Remote access enforcement](#remote-access-remote-setup-abac-remote-access-enforcement)
+ [Tag-based access control](#remote-access-remote-setup-abac-tag-based-access-control)

## Remote access enforcement
<a name="remote-access-remote-setup-abac-remote-access-enforcement"></a>

Control access to resources using the `sagemaker:RemoteAccess` condition key. This is supported by both `CreateSpace` and `UpdateSpace` APIs. The following example uses `CreateSpace`. 

You can ensure that users cannot create spaces with remote access enabled. This helps maintain security by defaulting to more restricted access settings. The following policy ensures users can:
+ Create new Studio spaces where remote access is explicitly disabled
+ Create new Studio spaces without specifying any remote access settings

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCreateSpaceRemoteAccessEnabled",
            "Effect": "Deny",
            "Action": [
                "sagemaker:CreateSpace",
                "sagemaker:UpdateSpace"
            ],
            "Resource": "arn:aws:sagemaker:*:*:space/*",
            "Condition": {
                "StringEquals": {
                    "sagemaker:RemoteAccess": [
                        "ENABLED"
                    ]
                }
            }
        },
        {
            "Sid": "AllowCreateSpace",
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateSpace",
                "sagemaker:UpdateSpace"
            ],
            "Resource": "arn:aws:sagemaker:*:*:space/*"
        }
    ]
}
```

------
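
With this policy attached, a call that tries to turn on remote access should be denied. The following sketch assumes `RemoteAccess` is set through the space settings JSON; the domain ID and space name are placeholders:

```shell
# Expected to fail with an AccessDenied error while the deny statement is in effect,
# because this call sets RemoteAccess to ENABLED.
aws sagemaker update-space --domain-id domain-id --space-name space-name \
    --space-settings '{"RemoteAccess": "ENABLED"}'
```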

## Tag-based access control
<a name="remote-access-remote-setup-abac-tag-based-access-control"></a>

Implement [tag-based](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/what-are-tags.html) access control to restrict connections based on resource and principal tags.

You can ensure users can only access resources appropriate for their role and project assignments. You can use the following policy to:
+ Allow users to connect only to spaces that match their assigned team, environment, and cost center
+ Implement fine-grained access control based on organizational structure

In the following example, the space is tagged with the following:

```
{ "Team": "ML", "Environment": "Production", "CostCenter": "12345" }
```
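
You can apply tags like these with the AWS CLI; the space ARN below is a placeholder:

```shell
# Tag the space so that ABAC conditions can match on Team, Environment, and CostCenter.
aws sagemaker add-tags \
    --resource-arn arn:aws:sagemaker:us-east-1:111122223333:space/domain-id/space-name-1 \
    --tags Key=Team,Value=ML Key=Environment,Value=Production Key=CostCenter,Value=12345
```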

You can have a role that contains the following policy to match resource and principal tags:

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictStartSessionOnTaggedSpacesInDomain",
            "Effect": "Allow",
            "Action": [
                "sagemaker:StartSession"
            ],
            "Resource": [
                "arn:aws:sagemaker:us-east-1:111122223333:space/domain-id/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Team": "${aws:PrincipalTag/Team}",
                    "aws:ResourceTag/Environment": "${aws:PrincipalTag/Environment}",
                    "aws:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}",
                    "aws:ResourceTag/IDC_UserName": "${aws:PrincipalTag/IDC_UserName}"
                }
            }
        }
    ]
}
```

------

When the role’s tags match, the user has permission to start the session and remotely connect to their space. See [Control access to AWS resources using tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html) for more information.

# Set up Studio to run with subnets without internet access within a VPC
<a name="remote-access-remote-setup-vpc-subnets-without-internet-access"></a>

This guide shows you how to connect to Amazon SageMaker Studio spaces from your Remote IDE when your Amazon SageMaker AI domain runs in private subnets without internet access. You’ll learn about connectivity requirements and setup options to establish secure remote connections in isolated network environments.

You can configure Amazon SageMaker Studio to run in VPC only mode with subnets without internet access. This setup enhances security for your machine learning workloads by operating in an isolated network environment where all traffic flows through the VPC. To enable external communications while maintaining security, use VPC endpoints for AWS services and configure VPC PrivateLink for required AWS dependencies.

**IDE support for private subnet connections**

The following table shows the supported connection methods for each Remote IDE when connecting to Studio spaces in private subnets without internet access.


| Connection method | VS Code | Kiro | Cursor | 
| --- | --- | --- | --- | 
|  HTTP Proxy support  |  Supported  |  Supported  |  Not supported  | 
|  Pre-packaged remote server and extensions  |  Supported  |  Not supported  |  Not supported  | 

**Important**  
Cursor is not supported for connecting to Studio spaces in private subnets without outbound internet access.

**Topics**
+ [Studio remote access network requirements](#remote-access-remote-setup-vpc-subnets-without-internet-access-network-requirements)
+ [Setup Studio remote access network](#remote-access-remote-setup-vpc-subnets-without-internet-access-setup)

## Studio remote access network requirements
<a name="remote-access-remote-setup-vpc-subnets-without-internet-access-network-requirements"></a>

**VPC mode limitations**  
Studio in VPC mode supports only private subnets. Studio cannot work with subnets directly attached to an internet gateway (IGW). Remote IDE connections share the same limitations as SageMaker AI. For more information, see [Connect Studio notebooks in a VPC to external resources](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-notebooks-and-internet-access.html).

### AWS PrivateLink requirements
<a name="remote-access-remote-setup-vpc-subnets-without-internet-access-vpc-privatelink-requirements"></a>

When SageMaker AI runs in private subnets, configure the following AWS Systems Manager (SSM) VPC endpoints in addition to the standard VPC endpoints required for SageMaker AI. For more information, see [Connect Studio Through a VPC Endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-interface-endpoint.html).
+ `com.amazonaws.REGION.ssm`
+ `com.amazonaws.REGION.ssmmessages`
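As a sketch, the two service names follow the standard interface endpoint naming pattern, substituting your Region. The Region value and the commented `create-vpc-endpoint` parameters below are illustrative placeholders:

```shell
#!/bin/bash
set -euo pipefail

# Illustrative Region; substitute your own.
REGION="us-east-1"

# Print the two SSM service names required for remote access.
for svc in ssm ssmmessages; do
  echo "com.amazonaws.${REGION}.${svc}"
done

# Hypothetical creation call for each service name (all IDs are placeholders):
# aws ec2 create-vpc-endpoint \
#   --vpc-endpoint-type Interface \
#   --vpc-id vpc-0123456789abcdef0 \
#   --subnet-ids subnet-0123456789abcdef0 \
#   --security-group-ids sg-0123456789abcdef0 \
#   --service-name com.amazonaws.us-east-1.ssm \
#   --private-dns-enabled
```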

**VPC endpoint policy recommendations**

The following recommended VPC endpoint policies allow the actions necessary for remote access. Each policy uses the `aws:PrincipalIsAWSService` condition to ensure that only AWS services, such as Amazon SageMaker AI, can make the calls. For more information, see [aws:PrincipalIsAWSService](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalisawsservice) in the IAM User Guide.

**SSM endpoint policy**

Use the following policy for the `com.amazonaws.REGION.ssm` endpoint:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "ssm:CreateActivation",
                "ssm:RegisterManagedInstance",
                "ssm:DeleteActivation",
                "ssm:DeregisterManagedInstance",
                "ssm:AddTagsToResource",
                "ssm:UpdateInstanceInformation",
                "ssm:UpdateInstanceAssociationStatus",
                "ssm:DescribeInstanceInformation",
                "ssm:ListInstanceAssociations",
                "ssm:ListAssociations",
                "ssm:GetDocument",
                "ssm:PutInventory"
            ],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:PrincipalIsAWSService": "true"
                }
            }
        }
    ]
}
```

**SSM Messages endpoint policy**

Use the following policy for the `com.amazonaws.REGION.ssmmessages` endpoint:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:PrincipalIsAWSService": "true"
                }
            }
        }
    ]
}
```

**VS Code specific network requirements**

Remote VS Code connection requires VS Code remote development, which needs specific network access to install the remote server and extensions. See the [remote development FAQ](https://code.visualstudio.com/docs/remote/faq) in the Visual Studio Code documentation for full network requirements. The following is a summary of the requirements:
+ Access to Microsoft’s VS Code server endpoints is needed to install and update the VS Code remote server.
+ Access to Visual Studio Marketplace and related CDN endpoints is required for installing VS Code extensions through the extension panel (alternatively, extensions can be installed manually using VSIX files without internet connection).
+ Some extensions may require access to additional endpoints for downloading their specific dependencies. See the extension’s documentation for their specific connectivity requirements.

**Kiro specific network requirements**

Remote Kiro connection requires Kiro remote development, which needs specific network access to install the remote server and extensions. For firewall and proxy server configuration, see [Kiro firewall configuration](https://kiro.dev/docs/privacy-and-security/firewalls/). The requirements are similar to those for VS Code:
+ Access to Kiro server endpoints is needed to install and update the Kiro remote server.
+ Access to extension marketplace and related CDN endpoints is required for installing Kiro extensions through the extension panel.
+ Some extensions may require access to additional endpoints for downloading their specific dependencies. See the extension’s documentation for their specific connectivity requirements.

## Set up the Studio remote access network
<a name="remote-access-remote-setup-vpc-subnets-without-internet-access-setup"></a>

You have the following options to connect your Remote IDE to Studio spaces in private subnets:
+ HTTP Proxy (supported for VS Code and Kiro)
+ Pre-packaged remote server and extensions (VS Code only)

### Set up HTTP Proxy with controlled allow-listing
<a name="remote-access-remote-setup-vpc-subnets-without-internet-access-setup-http-proxy-with-controlled-allow-listing"></a>

When your Studio space is behind a firewall or proxy, allow access to your IDE server and extension-related CDNs and endpoints.

1. Set up a public subnet to run the HTTP proxy (such as Squid), where you can configure which websites to allow. Ensure that the HTTP proxy is reachable from your SageMaker spaces.

1. The public subnet can be in the same VPC used by Studio or in a separate VPC peered with all the VPCs used by your Amazon SageMaker AI domains.
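For example, a minimal Squid allowlist might look like the following fragment. The domains shown are illustrative stand-ins for the IDE server and marketplace endpoints; use the endpoint lists from your IDE's documentation as the authoritative source:

```
# Hypothetical squid.conf fragment with a controlled allowlist
http_port 3128

acl ide_allowlist dstdomain .code.visualstudio.com
acl ide_allowlist dstdomain .visualstudio.com
acl ide_allowlist dstdomain .vsassets.io

http_access allow ide_allowlist
http_access deny all
```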

### Set up Pre-packaged remote server and extensions (VS Code only)
<a name="remote-access-remote-setup-vpc-subnets-without-internet-access-setup-pre-packaged-vs-code-remote-server-and-extensions"></a>

**Note**  
This option is only available for Visual Studio Code. Kiro and Cursor do not support pre-packaged remote server setup.

When your Studio spaces can’t access external endpoints to download the VS Code remote server and extensions, you can pre-package them. With this approach, you export a tarball containing the `.vscode-server` directory for a specific version of VS Code. Then, you use a SageMaker AI Lifecycle Configuration (LCC) script to copy and extract the tarball into the home directory (`/home/sagemaker-user`) of the Studio spaces. This LCC-based solution works with both AWS-provided and custom images. Even when you’re not using private subnets, this approach accelerates the setup of the VS Code remote server and pre-installed extensions.

**Instructions for pre-packaging your VS Code remote server and extensions**

1. Install VS Code on your local machine.

1. Launch a Linux-based (x64) Docker container with SSH enabled, either locally or via a Studio space with internet access. We recommend using a temporary Studio space with remote access and internet enabled for simplicity.

1. Connect your installed VS Code to the local Docker container via Remote SSH, or connect to the Studio space via the Studio remote VS Code feature. During connection, VS Code installs the remote server into `.vscode-server` in the home directory of the remote container. See [Example Dockerfile usage for pre-packaging your VS Code remote server and extensions](remote-access-local-ide-setup-vpc-no-internet.md#remote-access-local-ide-setup-vpc-no-internet-pre-packaged-vs-code-remote-server-and-extensions-example-dockerfile) for more information.

1. After connecting remotely, ensure you use the VS Code Default profile.

1. Install the required VS Code extensions and validate their functionality. For example, create and run a notebook to install Jupyter notebook-related extensions in the VS Code remote server.

   Ensure you [install the AWS Toolkit for Visual Studio Code extension](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/setup.html) after connecting to the remote container.

1. Archive the `$HOME/.vscode-server` directory (for example, as `vscode-server-with-extensions-for-1.100.2.tar.gz`) in either the local Docker container or in the terminal of the remotely connected Studio space.

1. Upload the tarball to Amazon S3.

1. Create an [LCC script](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-lifecycle-configurations.html) ([Example LCC script (LCC-install-vscode-server-v1.100.2)](remote-access-local-ide-setup-vpc-no-internet.md#remote-access-local-ide-setup-vpc-no-internet-pre-packaged-vs-code-remote-server-and-extensions-example-lcc)) that:
   + Downloads the specific archive from Amazon S3.
   + Extracts it into the home directory when a Studio space in a private subnet launches.

1. (Optional) Extend the LCC script to support per-user VS Code server tarballs stored in user-specific Amazon S3 folders.

1. (Optional) Maintain version-specific LCC scripts ([Example LCC script (LCC-install-vscode-server-v1.100.2)](remote-access-local-ide-setup-vpc-no-internet.md#remote-access-local-ide-setup-vpc-no-internet-pre-packaged-vs-code-remote-server-and-extensions-example-lcc)) that you can attach to your spaces, ensuring compatibility between your local VS Code client and the remote server.
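The archive and upload steps above can be sketched as follows. This is a minimal sketch: the scratch directory stands in for the remote home directory, the directory contents are placeholders, and the bucket name is hypothetical:

```shell
#!/bin/bash
set -euo pipefail

# Scratch directory standing in for the remote container's home directory.
workdir=$(mktemp -d)
mkdir -p "${workdir}/.vscode-server/extensions"
touch "${workdir}/.vscode-server/extensions/placeholder-extension"

# Archive the remote server directory.
cd "${workdir}"
tar -czf vscode-server-with-extensions-for-1.100.2.tar.gz .vscode-server

# Upload the tarball to Amazon S3 (bucket name is a placeholder):
# aws s3 cp vscode-server-with-extensions-for-1.100.2.tar.gz \
#   s3://amzn-s3-demo-bucket/remote_access/
```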

# Set up automated Studio space filtering when using the AWS Toolkit
<a name="remote-access-remote-setup-filter"></a>

Users can filter spaces in the AWS Toolkit for Visual Studio Code explorer to display only relevant spaces. This section provides information on filtering and how to set up automated filtering.

This setup only applies when using the [Method 2: AWS Toolkit in the Remote IDE](remote-access-local-ide-setup.md#remote-access-local-ide-setup-local-vs-code-method-2-aws-toolkit-in-vs-code) method to connect from your Remote IDE to Amazon SageMaker Studio spaces. See [Set up remote access](remote-access-remote-setup.md) for more information.

**Topics**
+ [Filtering overview](#remote-access-remote-setup-filter-overview)
+ [Set up when connecting with IAM credentials](#remote-access-remote-setup-filter-set-up-iam-credentials)

## Filtering overview
<a name="remote-access-remote-setup-filter-overview"></a>

**Manual filtering** allows users to manually select which user profiles to display spaces for through the AWS Toolkit interface. This method works for all authentication types and takes precedence over automated filtering. To manually filter, see [Manual filtering](remote-access-local-ide-setup-filter.md#remote-access-local-ide-setup-filter-manual).

**Automated filtering** automatically shows only spaces relevant to the authenticated user. This filtering behavior depends on the authentication method during sign-in. See [connecting to AWS from the Toolkit](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/connect.html#connect-to-aws) in the Toolkit for VS Code User Guide for more information. The following lists the sign-in options.
+ **Authenticate and connect with SSO**: Automated filtering works by default.
+ **Authenticate and connect with IAM credentials**: Automated filtering **requires administrator setup** for the following IAM credentials. Without this setup, AWS Toolkit cannot identify which spaces belong to the user, so all spaces are shown by default.
  + **Using IAM user credentials**
  + **Using assumed IAM role session credentials**

## Set up when connecting with IAM credentials
<a name="remote-access-remote-setup-filter-set-up-iam-credentials"></a>

**When using IAM user credentials**

Toolkit for VS Code can match spaces belonging to user profiles whose names start with the authenticated IAM user name or assumed-role session name. To set this up:

**Note**  
Administrators must configure Studio user profile names to follow this naming convention for automated filtering to work correctly.
+ Administrators must ensure Studio user profile names follow the naming convention:
  + For IAM users: prefix the profile name with `IAM-user-name-`
  + For assumed roles: prefix the profile name with `assumed-role-session-name-`
+ `aws sts get-caller-identity` returns the identity information used for matching.
+ Spaces belonging to the matched user profiles are automatically filtered in the Toolkit for VS Code.
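The prefix matching can be illustrated with a short shell sketch; the user and profile names below are hypothetical:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical identity, as reported by `aws sts get-caller-identity`.
caller_name="jane"

# Hypothetical Studio user profile names.
profiles="jane-default-profile bob-default-profile jane-experiments"

# Profiles whose names start with the caller name are shown in the Toolkit.
matched=""
for p in ${profiles}; do
  case "${p}" in
    "${caller_name}"*) matched="${matched} ${p}" ;;
  esac
done

echo "Profiles shown:${matched}"
```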

**When using assumed IAM role session credentials** In addition to the setup for IAM user credentials above, ensure that session ARNs include matching user identifiers as prefixes. You can enforce this with a trust policy. [Create a trust policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and attach it to the role that is assumed for authentication.

This setup is not required for direct IAM user credentials or IAM Identity Center authentication.

**Example: set up a trust policy for IAM role session credentials** Create a trust policy that requires role session names to match the IAM user name. The following is an example policy:

```
{
    "Statement": [
        {
            "Sid": "RoleTrustPolicyRequireUsernameForSessionName",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Principal": {"AWS": "arn:aws:iam::ACCOUNT:root"},
            "Condition": {
                "StringLike": {"sts:RoleSessionName": "${aws:username}"}
            }
        }
    ]
}
```
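The effect of this policy can be seen in the caller's ARN: the session name is the last path segment of the assumed-role ARN that `aws sts get-caller-identity` returns, and the Toolkit compares profile names against it. The ARN and names below are hypothetical:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical Arn field returned by `aws sts get-caller-identity`.
caller_arn="arn:aws:sts::111122223333:assumed-role/DataScienceRole/jane"

# The session name is the last path segment of the ARN.
session_name="${caller_arn##*/}"
echo "session name: ${session_name}"

# A user profile named with the session name as a prefix is matched.
profile_name="jane-default-profile"
case "${profile_name}" in
  "${session_name}"*) echo "profile is shown" ;;
  *)                  echo "profile is hidden" ;;
esac
```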

# Set up Remote IDE
<a name="remote-access-local-ide-setup"></a>

After administrators complete the instructions in [Connect your Remote IDE to SageMaker spaces with remote access](remote-access.md), you can connect your Remote IDE to your remote SageMaker spaces.

**Topics**
+ [Set up your local environment](#remote-access-local-ide-setup-local-environment)
+ [Connect to your Remote IDE](#remote-access-local-ide-setup-local-vs-code)
+ [Connect to VPC with subnets without internet access](remote-access-local-ide-setup-vpc-no-internet.md)
+ [Filter your Studio spaces](remote-access-local-ide-setup-filter.md)

## Set up your local environment
<a name="remote-access-local-ide-setup-local-environment"></a>

Install your preferred Remote IDE on your local machine:
+ [Visual Studio Code](https://code.visualstudio.com/)
+ [Kiro](https://kiro.dev/)
+ [Cursor](https://cursor.com/home)

For information on the version requirements, see [IDE version requirements](remote-access.md#remote-access-ide-version-requirements).

## Connect to your Remote IDE
<a name="remote-access-local-ide-setup-local-vs-code"></a>

Before you can establish a connection from your Remote IDE to your remote SageMaker spaces, your administrator must [Set up remote access](remote-access-remote-setup.md). Your administrator sets up a specific method for you to establish a connection. Choose the method that was set up for you.

**Topics**
+ [Method 1: Deep link from Studio UI](#remote-access-local-ide-setup-local-vs-code-method-1-deep-link-from-studio-ui)
+ [Method 2: AWS Toolkit in the Remote IDE](#remote-access-local-ide-setup-local-vs-code-method-2-aws-toolkit-in-vs-code)
+ [Method 3: Connect from the terminal via SSH CLI](#remote-access-local-ide-setup-local-vs-code-method-3-connect-from-the-terminal-via-ssh-cli)

### Method 1: Deep link from Studio UI
<a name="remote-access-local-ide-setup-local-vs-code-method-1-deep-link-from-studio-ui"></a>

Use the following procedure to establish a connection using deep link.

1. [Launch Amazon SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated-launch.html#studio-updated-launch-console).

1. In the Studio UI, navigate to your space.

1. Choose the **Open in VS Code**, **Open in Kiro**, or **Open in Cursor** button for your preferred IDE. Ensure that the IDE is already installed on your local computer.

1. When prompted, confirm that you want to open your IDE. Your IDE opens and displays another confirmation prompt. After you confirm, the remote connection is established.

### Method 2: AWS Toolkit in the Remote IDE
<a name="remote-access-local-ide-setup-local-vs-code-method-2-aws-toolkit-in-vs-code"></a>

Use the following procedure to establish a connection using the AWS Toolkit for Visual Studio Code. This method is available for VS Code, Kiro, and Cursor.

1. Open your Remote IDE (VS Code, Kiro, or Cursor).

1. Open the AWS Toolkit extension.

1. [Connect to AWS](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/connect.html).

1. In the AWS Explorer, expand **SageMaker AI**, then expand **Studio**.

1. Find your Studio space.

1. Choose the **Connect** icon next to your space to start it.
**Note**  
If remote access is not already enabled, stop and restart the space in the Toolkit for VS Code to enable it.
If the space is not using a supported [instance size](https://docs.aws.amazon.com/sagemaker/latest/dg/remote-access.html#remote-access-instance-requirements), you are asked to change the instance.

### Method 3: Connect from the terminal via SSH CLI
<a name="remote-access-local-ide-setup-local-vs-code-method-3-connect-from-the-terminal-via-ssh-cli"></a>

Choose one of the following platform options to view the procedure to establish a connection using the SSH CLI.

**Note**  
Ensure that you have the latest versions of the [Local machine prerequisites](remote-access.md#remote-access-local-prerequisites) installed before following the instructions below.
If you [Bring your own image (BYOI)](studio-updated-byoi.md), ensure you have installed the required dependencies listed in [Image requirements](remote-access.md#remote-access-image-requirements) before proceeding.

------
#### [ Linux/macOS ]

Create a shell script (for example, `/home/user/sagemaker_connect.sh`):

```
#!/bin/bash
# Disable the -x option if printing each command is not needed.
set -exuo pipefail

SPACE_ARN="$1"
AWS_PROFILE="${2:-}"

# Validate ARN and extract region
if [[ "$SPACE_ARN" =~ ^arn:aws[-a-z]*:sagemaker:([a-z0-9-]+):[0-9]{12}:space\/[^\/]+\/[^\/]+$ ]]; then
    AWS_REGION="${BASH_REMATCH[1]}"
else
    echo "Error: Invalid SageMaker Studio Space ARN format."
    exit 1
fi

# Optional profile flag
PROFILE_ARG=()
if [[ -n "$AWS_PROFILE" ]]; then
    PROFILE_ARG=(--profile "$AWS_PROFILE")
fi

# Start session
START_SESSION_JSON=$(aws sagemaker start-session \
    --resource-identifier "$SPACE_ARN" \
    --region "${AWS_REGION}" \
    "${PROFILE_ARG[@]}")

# Extract fields using grep and sed
SESSION_ID=$(echo "$START_SESSION_JSON" | grep -o '"SessionId": "[^"]*"' | sed 's/.*: "//;s/"$//')
STREAM_URL=$(echo "$START_SESSION_JSON" | grep -o '"StreamUrl": "[^"]*"' | sed 's/.*: "//;s/"$//')
TOKEN=$(echo "$START_SESSION_JSON" | grep -o '"TokenValue": "[^"]*"' | sed 's/.*: "//;s/"$//')

# Validate extracted values
if [[ -z "$SESSION_ID" || -z "$STREAM_URL" || -z "$TOKEN" ]]; then
    echo "Error: Failed to extract session information from sagemaker start session response."
    exit 1
fi

# Call session-manager-plugin
session-manager-plugin \
    "{\"streamUrl\":\"$STREAM_URL\",\"tokenValue\":\"$TOKEN\",\"sessionId\":\"$SESSION_ID\"}" \
    "$AWS_REGION" "StartSession"
```

1. Make the script executable:

   ```
   chmod +x /home/user/sagemaker_connect.sh
   ```

1. Configure `$HOME/.ssh/config` to add the following entry:

```
Host space-name
  HostName "arn:PARTITION:sagemaker:us-east-1:111122223333:space/domain-id/space-name"
  ProxyCommand "/home/user/sagemaker_connect.sh" "%h"
  ForwardAgent yes
  AddKeysToAgent yes
  StrictHostKeyChecking accept-new
```

For example, the `PARTITION` can be `aws`.

If you need to use a [named AWS credential profile](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html#cli-configure-files-using-profiles), change the proxy command as follows:

```
  ProxyCommand "/home/user/sagemaker_connect.sh" "%h" YOUR_CREDENTIAL_PROFILE_NAME
```
+ Connect via SSH or run an SCP command:

```
ssh space-name
scp file_abc space-name:/tmp/
```

------
#### [ Windows ]

**Prerequisites for Windows:**
+ PowerShell 5.1 or later
+ SSH client (OpenSSH recommended)

Create a PowerShell script (for example, `C:\Users\user-name\sagemaker_connect.ps1`):

```
# sagemaker_connect.ps1
param(
    [Parameter(Mandatory=$true)]
    [string]$SpaceArn,

    [Parameter(Mandatory=$false)]
    [string]$AwsProfile = ""
)

# Enable error handling
$ErrorActionPreference = "Stop"

# Validate ARN and extract region
if ($SpaceArn -match "^arn:aws[-a-z]*:sagemaker:([a-z0-9-]+):[0-9]{12}:space\/[^\/]+\/[^\/]+$") {
    $AwsRegion = $Matches[1]
} else {
    Write-Error "Error: Invalid SageMaker Studio Space ARN format."
    exit 1
}

# Build AWS CLI command
$awsCommand = @("sagemaker", "start-session", "--resource-identifier", $SpaceArn, "--region", $AwsRegion)

if ($AwsProfile) {
    $awsCommand += @("--profile", $AwsProfile)
}

try {
    # Start session and capture output
    Write-Host "Starting SageMaker session..." -ForegroundColor Green
    $startSessionOutput = & aws @awsCommand

    # Try to parse JSON response
    try {
        $sessionData = $startSessionOutput | ConvertFrom-Json
    } catch {
        Write-Error "Failed to parse JSON response: $_"
        Write-Host "Raw response was:" -ForegroundColor Yellow
        Write-Host $startSessionOutput
        exit 1
    }

    $sessionId = $sessionData.SessionId
    $streamUrl = $sessionData.StreamUrl
    $token = $sessionData.TokenValue

    # Validate extracted values
    if (-not $sessionId -or -not $streamUrl -or -not $token) {
        Write-Error "Error: Failed to extract session information from sagemaker start session response."
        Write-Host "Parsed response was:" -ForegroundColor Yellow
        Write-Host ($sessionData | ConvertTo-Json)
        exit 1
    }

    Write-Host "Session started successfully. Connecting..." -ForegroundColor Green

    # Create session manager plugin command
    $sessionJson = @{
        streamUrl = $streamUrl
        tokenValue = $token
        sessionId = $sessionId
    } | ConvertTo-Json -Compress

    # Escape the JSON string
    $escapedJson = $sessionJson -replace '"', '\"'

    # Call session-manager-plugin
    & session-manager-plugin "$escapedJson" $AwsRegion "StartSession"

} catch {
    Write-Error "Failed to start session: $_"
    exit 1
}
```
+ Configure `C:\Users\user-name\.ssh\config` to add the following entry:

```
Host space-name
  HostName "arn:aws:sagemaker:us-east-1:111122223333:space/domain-id/space-name"
  ProxyCommand "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" -ExecutionPolicy RemoteSigned -File "C:\\Users\\user-name\\sagemaker_connect.ps1" "%h"
  ForwardAgent yes
  AddKeysToAgent yes
  User sagemaker-user
  StrictHostKeyChecking accept-new
```

------

# Connect to VPC with subnets without internet access
<a name="remote-access-local-ide-setup-vpc-no-internet"></a>

Before connecting your Remote IDE to Studio spaces in private subnets without internet access, ensure your administrator has [Set up Studio to run with subnets without internet access within a VPC](remote-access-remote-setup-vpc-subnets-without-internet-access.md).

You have the following options to connect your Remote IDE to Studio spaces in private subnets:
+ Set up HTTP Proxy (supported for VS Code and Kiro)
+ Pre-packaged remote server and extensions (VS Code only)

**Important**  
Cursor is not supported for connecting to Studio spaces in private subnets without outbound internet access.

**Topics**
+ [HTTP Proxy with controlled allow-listing](#remote-access-local-ide-setup-vpc-no-internet-http-proxy-with-controlled-allow-listing)
+ [Pre-packaged remote server and extensions (VS Code only)](#remote-access-local-ide-setup-vpc-no-internet-pre-packaged-vs-code-remote-server-and-extensions)

## HTTP Proxy with controlled allow-listing
<a name="remote-access-local-ide-setup-vpc-no-internet-http-proxy-with-controlled-allow-listing"></a>

When your Studio space is behind a firewall or proxy, ask your administrator to allow access to your IDE server and extension-related CDNs and endpoints. For more information, see [Set up HTTP Proxy with controlled allow-listing](remote-access-remote-setup-vpc-subnets-without-internet-access.md#remote-access-remote-setup-vpc-subnets-without-internet-access-setup-http-proxy-with-controlled-allow-listing).

------
#### [ VS Code ]

Configure the HTTP proxy for VS Code remote development by providing the proxy URL with the `remote.SSH.httpProxy` or `remote.SSH.httpsProxy` setting.

**Note**  
Consider enabling the **Remote.SSH: Use Curl And Wget Configuration Files** setting so that the `curlrc` and `wgetrc` files, placed in their respective default locations in the SageMaker space, are used when downloading through the proxy.
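For example, the files might look like the following fragments; the proxy host and port are placeholders for your own proxy endpoint:

```
# ~/.curlrc in the SageMaker space (proxy address is a placeholder)
proxy = http://proxy.example.internal:3128
```

```
# ~/.wgetrc in the SageMaker space (proxy address is a placeholder)
use_proxy = on
http_proxy = http://proxy.example.internal:3128
https_proxy = http://proxy.example.internal:3128
```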

------
#### [ Kiro ]

Configure the HTTP proxy for Kiro remote development by setting the `aws.sagemaker.ssh.kiro.httpsProxy` setting to your HTTP or HTTPS proxy endpoint.

If you use MCP (Model Context Protocol) servers in Kiro, you also need to add the proxy environment variables to your MCP server configuration:

```
"env": {
    "http_proxy": "${http_proxy}",
    "https_proxy": "${https_proxy}"
}
```

------

This option works when you are allowed to set up an HTTP proxy. It also gives you the flexibility to install additional extensions, because some extensions require access to public endpoints.

## Pre-packaged remote server and extensions (VS Code only)
<a name="remote-access-local-ide-setup-vpc-no-internet-pre-packaged-vs-code-remote-server-and-extensions"></a>

**Note**  
This option is only available for Visual Studio Code. Kiro and Cursor do not support pre-packaged remote server setup.

When your Studio spaces can’t access external endpoints to download the VS Code remote server and extensions, you can pre-package them. With this approach, your administrator exports a tarball containing the `.vscode-server` directory for a specific version of VS Code. Then, the administrator uses a SageMaker AI Lifecycle Configuration (LCC) script to copy and extract the tarball into your home directory (`/home/sagemaker-user`). For more information, see [Set up Pre-packaged remote server and extensions (VS Code only)](remote-access-remote-setup-vpc-subnets-without-internet-access.md#remote-access-remote-setup-vpc-subnets-without-internet-access-setup-pre-packaged-vs-code-remote-server-and-extensions).

**Instructions for using the pre-packaged VS Code remote server and extensions**

1. Install VS Code on your local machine.

1. When you connect to the SageMaker space:
   + Use the Default profile to ensure compatibility with pre-packaged extensions. Otherwise, you’ll need to install extensions using downloaded VSIX files after connecting to the Studio space.
   + Choose a VS Code version-specific LCC script to attach to the space when you launch it.

### Example Dockerfile usage for pre-packaging your VS Code remote server and extensions
<a name="remote-access-local-ide-setup-vpc-no-internet-pre-packaged-vs-code-remote-server-and-extensions-example-dockerfile"></a>

If you can't create a space with remote access and internet enabled, use the following sample Dockerfile to launch a local container with an SSH server pre-installed.

**Note**  
In this example the SSH server does not require authentication and is only used for exporting the VS Code remote server.
The container should be built and run on an x64 architecture.

```
FROM amazonlinux:2023

# Install OpenSSH server and required tools
RUN dnf install -y \
    openssh-server \
    shadow-utils \
    passwd \
    sudo \
    tar \
    gzip \
    && dnf clean all

# Create a user with no password
RUN useradd -m -s /bin/bash sagemaker-user && \
    passwd -d sagemaker-user

# Add sagemaker-user to sudoers via wheel group
RUN usermod -aG wheel sagemaker-user && \
    echo 'sagemaker-user ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/sagemaker-user && \
    chmod 440 /etc/sudoers.d/sagemaker-user

# Configure SSH to allow empty passwords and password auth
RUN sed -i 's/^#\?PermitEmptyPasswords .*/PermitEmptyPasswords yes/' /etc/ssh/sshd_config && \
    sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Generate SSH host keys
RUN ssh-keygen -A

# Expose SSH port
EXPOSE 22

WORKDIR /home/sagemaker-user
USER sagemaker-user

# Default to an interactive shell; the SSH server is started later with docker exec
CMD ["bash"]
```

Use the following commands to build and run the container:

```
# Build the image
docker build . -t remote_server_export

# Run the container
docker run --rm -it -d \
  -v /tmp/remote_access/.vscode-server:/home/sagemaker-user/.vscode-server \
  -p 2222:22 \
  --name remote_server_export \
  remote_server_export

# Change the permission for the mounted folder
docker exec -i remote_server_export \
       bash -c 'sudo chown sagemaker-user:sagemaker-user ~/.vscode-server'

# Start the SSH server in the container
docker exec -i remote_server_export bash -c 'sudo /usr/sbin/sshd -D &'
```

Connect using the following command:

```
ssh sagemaker-user@localhost -p 2222
```

Before you can connect to this container from VS Code, configure the following in the `.ssh/config` file. Afterwards, `remote_access_export` appears as a host name in the Remote SSH side panel when connecting. For example:

```
Host remote_access_export
  HostName localhost
  User sagemaker-user
  Port 2222
  ForwardAgent yes
```

Archive `/tmp/remote_access/.vscode-server` and follow the steps in Pre-packaged remote server and extensions to connect and install the extensions. After extracting, ensure that `.vscode-server` is the top-level folder in the archive.

```
cd /tmp/remote_access/
sudo tar -czvf vscode-server-with-extensions-for-1.100.2.tar.gz .vscode-server
```

### Example LCC script (LCC-install-vscode-server-v1.100.2)
<a name="remote-access-local-ide-setup-vpc-no-internet-pre-packaged-vs-code-remote-server-and-extensions-example-lcc"></a>

The following is an example of how to install a specific version of VS Code remote server.

```
#!/bin/bash

set -x

remote_server_file=vscode-server-with-extensions-for-1.100.2.tar.gz

if [ ! -d "${HOME}/.vscode-server" ]; then
    cd /tmp
    aws s3 cp "s3://S3_BUCKET/remote_access/${remote_server_file}" .
    tar -xzvf "${remote_server_file}"
    mv .vscode-server "${HOME}"
    rm "${remote_server_file}"
else
    echo "${HOME}/.vscode-server already exists, skipping download and install."
fi
fi
```
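To register a script like this as a lifecycle configuration, the `aws sagemaker create-studio-lifecycle-config` command expects the script content base64-encoded. The following is a minimal sketch of the encoding step; the script content is abbreviated, and the configuration name and app type are illustrative assumptions:

```shell
#!/bin/bash
set -euo pipefail

# Abbreviated stand-in for the LCC script above.
tmpdir=$(mktemp -d)
cat > "${tmpdir}/lcc-install-vscode-server.sh" <<'EOF'
#!/bin/bash
echo "install the pre-packaged VS Code remote server"
EOF

# The API expects the script content base64-encoded.
LCC_CONTENT=$(base64 < "${tmpdir}/lcc-install-vscode-server.sh" | tr -d '\n')
echo "encoded ${#LCC_CONTENT} characters"

# Hypothetical registration call:
# aws sagemaker create-studio-lifecycle-config \
#   --studio-lifecycle-config-name lcc-install-vscode-server-v1-100-2 \
#   --studio-lifecycle-config-app-type CodeEditor \
#   --studio-lifecycle-config-content "${LCC_CONTENT}"
```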

# Filter your Studio spaces
<a name="remote-access-local-ide-setup-filter"></a>

You can use filtering to display only the relevant Amazon SageMaker AI spaces in the AWS Toolkit for Visual Studio Code explorer. The following provides information on manual filtering and automated filtering. For more information on the definitions of manual and automatic filtering, see [Filtering overview](remote-access-remote-setup-filter.md#remote-access-remote-setup-filter-overview).

This setup only applies when using the [Method 2: AWS Toolkit in the Remote IDE](remote-access-local-ide-setup.md#remote-access-local-ide-setup-local-vs-code-method-2-aws-toolkit-in-vs-code) method to connect from your Remote IDE to Amazon SageMaker Studio spaces. See [Set up remote access](remote-access-remote-setup.md) for more information.

**Topics**
+ [Manual filtering](#remote-access-local-ide-setup-filter-manual)
+ [Automatic filtering setup when using IAM credentials to sign-in](#remote-access-local-ide-setup-filter-automatic-IAM-credentials)

## Manual filtering
<a name="remote-access-local-ide-setup-filter-manual"></a>

To manually filter displayed spaces:
+ Open your Remote IDE and navigate to the Toolkit for VS Code side panel explorer
+ Find the **SageMaker AI** section
+ Choose the filter icon on the right of the SageMaker AI section header. This will open a dropdown menu.
+ In the dropdown menu, select the user profiles for which you want to display spaces

## Automatic filtering setup when using IAM credentials to sign-in
<a name="remote-access-local-ide-setup-filter-automatic-IAM-credentials"></a>

Automated filtering depends on the authentication method during sign-in. See [Connecting to AWS from the Toolkit](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/connect.html#connect-to-aws) in the Toolkit for VS Code User Guide for more information.

When you authenticate and connect with **IAM Credentials**, automated filtering requires the configuration described in [Set up when connecting with IAM credentials](remote-access-remote-setup-filter.md#remote-access-remote-setup-filter-set-up-iam-credentials). Without this setup, if users opt in to identity filtering, no spaces are shown.

Once the above is set up, the AWS Toolkit matches spaces belonging to user profiles whose names start with the authenticated IAM user name or assumed-role session name.

Automatic filtering is opt-in for users:
+ Open your Remote IDE settings.
+ Navigate to the **AWS Toolkit** extension.
+ Find **Enable Identity Filtering**.
+ Choose to enable automatic filtering of spaces based on your AWS identity.

# Supported AWS Regions
<a name="remote-access-supported-regions"></a>

The following table lists the AWS Regions where Remote IDE connections to Studio spaces are supported, along with the IDEs available in each Region.


| AWS Region | VS Code | Kiro | Cursor | 
| --- | --- | --- | --- | 
| us-east-1 | Supported | Supported | Supported | 
| us-east-2 | Supported | Supported | Supported | 
| us-west-1 | Supported | Supported | Supported | 
| us-west-2 | Supported | Supported | Supported | 
| af-south-1 | Supported | Supported | Supported | 
| ap-east-1 | Supported | Supported | Supported | 
| ap-south-1 | Supported | Supported | Supported | 
| ap-northeast-1 | Supported | Supported | Supported | 
| ap-northeast-2 | Supported | Supported | Supported | 
| ap-northeast-3 | Supported | Supported | Supported | 
| ap-southeast-1 | Supported | Supported | Supported | 
| ap-southeast-2 | Supported | Supported | Supported | 
| ap-southeast-3 | Supported | Supported | Supported | 
| ap-southeast-5 | Supported | Supported | Supported | 
| ca-central-1 | Supported | Supported | Supported | 
| eu-central-1 | Supported | Supported | Supported | 
| eu-central-2 | Supported | Supported | Supported | 
| eu-north-1 | Supported | Supported | Supported | 
| eu-south-1 | Supported | Supported | Supported | 
| eu-south-2 | Supported | Supported | Supported | 
| eu-west-1 | Supported | Supported | Supported | 
| eu-west-2 | Supported | Supported | Supported | 
| eu-west-3 | Supported | Supported | Supported | 
| il-central-1 | Supported | Supported | Supported | 
| me-central-1 | Supported | Not supported | Not supported | 
| me-south-1 | Supported | Not supported | Not supported | 
| sa-east-1 | Supported | Supported | Supported | 

# Bring your own image (BYOI)
<a name="studio-updated-byoi"></a>

An image is a file that identifies the kernels, language packages, and other dependencies required to run your applications. It includes:
+ Programming languages (like Python or R)
+ Kernels
+ Libraries and packages
+ Other necessary software

Amazon SageMaker Distribution (`sagemaker-distribution`) is a set of Docker images that include popular frameworks and packages for machine learning, data science, and visualization. For more information, see [SageMaker Studio image support policy](sagemaker-distribution.md).

If you need different functionality, you can bring your own image (BYOI). You may want to create a custom image if:
+ You need a specific version of a programming language or library
+ You want to include custom tools or packages
+ You're working with specialized software not available in the standard images

## Key terminology
<a name="studio-updated-byoi-basics"></a>

The following section defines key terms for bringing your own image to use with SageMaker AI.
+ **Dockerfile:** A text-based document with instructions for building a Docker image. This identifies the language packages and other dependencies for your Docker image.
+ **Docker image:** A packaged set of software and dependencies built from a Dockerfile.
+ **SageMaker AI image store:** A store for your custom images in SageMaker AI.

**Topics**
+ [Key terminology](#studio-updated-byoi-basics)
+ [Custom image specifications](studio-updated-byoi-specs.md)
+ [How to bring your own image](studio-updated-byoi-how-to.md)
+ [Launch a custom image in Studio](studio-updated-byoi-how-to-launch.md)
+ [View your custom image details](studio-updated-byoi-view-images.md)
+ [Speed up container startup with SOCI](soci-indexing.md)
+ [Detach and clean up custom image resources](studio-updated-byoi-how-to-detach-from-domain.md)

# Custom image specifications
<a name="studio-updated-byoi-specs"></a>

The image that you specify in your Dockerfile must match the specifications in the following sections to create the image successfully.

**Topics**
+ [Running the image](#studio-updated-byoi-specs-run)
+ [Specifications for the user and file system](#studio-updated-byoi-specs-user-and-filesystem)
+ [Health check and URL for applications](#studio-updated-byoi-specs-app-healthcheck)
+ [Dockerfile samples](#studio-updated-byoi-specs-dockerfile-templates)

## Running the image
<a name="studio-updated-byoi-specs-run"></a>

You can make the following configurations by updating your [ContainerConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ContainerConfig.html). For an example, see [Update container configuration](studio-updated-byoi-how-to-container-configuration.md).
+ `Entrypoint` – You can configure `ContainerEntrypoint` and `ContainerArguments` that are passed to the container at runtime. We recommend configuring your entry point using `ContainerConfig`. See the above link for an example.
+ `EnvVariables` – When using Studio, you can define custom `ContainerEnvironment` variables for your container. You can optionally update your environment variables using `ContainerConfig`. See the above link for an example.

  SageMaker AI-specific environment variables take precedence and will override any variables with the same names. For example, SageMaker AI automatically provides environment variables prefixed with `AWS_` and `SAGEMAKER_` to ensure proper integration with AWS services and SageMaker AI functionality. The following are a few example SageMaker AI-specific environment variables:
  + `AWS_ACCOUNT_ID`
  + `AWS_REGION`
  + `AWS_DEFAULT_REGION`
  + `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`
  + `SAGEMAKER_SPACE_NAME`
  + `SAGEMAKER_APP_TYPE`
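Inside a running Studio application container, you can check which of these variables are present. The following sketch assumes a POSIX shell is available in the image; outside of a SageMaker AI container, it simply reports that none are set.

```shell
# List AWS- and SageMaker AI-provided environment variables, if any are set
env | sort | grep -E '^(AWS_|SAGEMAKER_)' \
    || echo "No AWS_/SAGEMAKER_ variables set (not running inside a SageMaker AI container)"
```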

## Specifications for the user and file system
<a name="studio-updated-byoi-specs-user-and-filesystem"></a>
+ `WorkingDirectory` – The Amazon EBS volume for your space is mounted on the path `/home/sagemaker-user`. You can't change the mount path. Use the `WORKDIR` instruction to set the working directory of your image to a folder within `/home/sagemaker-user`.
+ `UID` – The user ID of the Docker container. UID=1000 is a supported value. You can add sudo access to your users. The IDs are remapped to prevent a process running in the container from having more privileges than necessary.
+ `GID` – The group ID of the Docker container. GID=100 is a supported value. You can add sudo access to your users. The IDs are remapped to prevent a process running in the container from having more privileges than necessary.
+ Metadata directories – The `/opt/.sagemakerinternal` and `/opt/ml` directories are reserved for use by AWS. The metadata file in `/opt/ml` contains metadata about resources such as `DomainId`.

  Use the following command to show the file system contents:

  ```
  cat /opt/ml/metadata/resource-metadata.json
  ```
+ Logging directories – The `/var/log/studio` directory is reserved for the logging directories of your applications and their associated extensions. We recommend that you don't use this folder when creating your image.
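You can verify the user and group configuration from a shell inside the running container. This is a sketch; on a SageMaker AI custom image the values should be 1000 and 100, but on other systems the output will differ.

```shell
# Print the effective user and group IDs of the current process;
# a SageMaker AI custom image should report UID 1000 and GID 100
echo "UID: $(id -u)"
echo "GID: $(id -g)"
```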

## Health check and URL for applications
<a name="studio-updated-byoi-specs-app-healthcheck"></a>

The health check and URL depend on the applications. Choose the following link associated with the application you are building the image for.
+ [Health check and URL for applications](code-editor-custom-images.md#code-editor-custom-images-app-healthcheck) for Code Editor
+ [Health check and URL for applications](studio-updated-jl-admin-guide-custom-images.md#studio-updated-jl-admin-guide-custom-images-app-healthcheck) for JupyterLab

## Dockerfile samples
<a name="studio-updated-byoi-specs-dockerfile-templates"></a>

For Dockerfile samples that meet both the requirements on this page and your specific application needs, see the sample Dockerfiles in the section for the relevant application. The following Amazon SageMaker Studio applications provide samples: 
+ [Dockerfile examples](code-editor-custom-images.md#code-editor-custom-images-dockerfile-templates) for Code Editor
+ [Dockerfile examples](studio-updated-jl-admin-guide-custom-images.md#studio-updated-jl-custom-images-dockerfile-templates) for JupyterLab

**Note**  
If you are bringing your own image to SageMaker Unified Studio, you will need to follow the [Dockerfile specifications](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/byoi-specifications.html) in the *Amazon SageMaker Unified Studio User Guide*.  
`Dockerfile` examples for SageMaker Unified Studio can be found in [Dockerfile example](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/byoi-specifications.html#byoi-specifications-example) in the *Amazon SageMaker Unified Studio User Guide*.

# How to bring your own image
<a name="studio-updated-byoi-how-to"></a>

The following pages will provide instructions on how to bring your own custom image. Ensure that the following prerequisites are satisfied before continuing.

## Prerequisites
<a name="studio-updated-byoi-how-to-prerequisites"></a>

You will need to complete the following prerequisites to bring your own image to Amazon SageMaker AI.
+ Set up the Docker application. For more information, see [Get started](https://docs.docker.com/get-started/) in the *Docker documentation*.
+ Install the latest AWS CLI by following the steps in [Getting started with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) in the *AWS Command Line Interface User Guide for Version 2*.
+ Permissions to access the Amazon Elastic Container Registry (Amazon ECR) service. For more information, see [Amazon ECR Managed Policies](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr_managed_policies.html) in the *Amazon ECR User Guide*.
+ An AWS Identity and Access Management role that has the [AmazonSageMakerFullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonSageMakerFullAccess) policy attached.

**Topics**
+ [Prerequisites](#studio-updated-byoi-how-to-prerequisites)
+ [Create a custom image and push to Amazon ECR](studio-updated-byoi-how-to-prepare-image.md)
+ [Attach your custom image to your domain](studio-updated-byoi-how-to-attach-to-domain.md)
+ [Update container configuration](studio-updated-byoi-how-to-container-configuration.md)

# Create a custom image and push to Amazon ECR
<a name="studio-updated-byoi-how-to-prepare-image"></a>

This page provides instructions on how to create a local Dockerfile, build the container image, and add it to Amazon Elastic Container Registry (Amazon ECR).

**Note**  
In the following examples, the tags are not specified, and the tag `latest` is applied by default. If you would like to specify a tag, you will need to append `:tag` to the end of the image names. For more information, see [docker image tag](https://docs.docker.com/reference/cli/docker/image/tag/) in the *Docker documentation*.

**Topics**
+ [Create a local Dockerfile and build the container image](#studio-updated-byoi-how-to-create-local-dockerfile)
+ [Add a Docker image to Amazon ECR](#studio-updated-byoi-add-container-image)

## Create a local Dockerfile and build the container image
<a name="studio-updated-byoi-how-to-create-local-dockerfile"></a>

Use the following instructions to create a Dockerfile with your desired software and dependencies.

**To create your Dockerfile**

1. First set your variables for the AWS CLI commands that follow.

   ```
   LOCAL_IMAGE_NAME=local-image-name
   ```

   `local-image-name` is a name that you define here for the container image on your local device.

1. Create a text-based document, named `Dockerfile`, that meets the specifications in [Custom image specifications](studio-updated-byoi-specs.md).

   `Dockerfile` examples for supported applications can be found in [Dockerfile samples](studio-updated-byoi-specs.md#studio-updated-byoi-specs-dockerfile-templates).
**Note**  
If you are bringing your own image to SageMaker Unified Studio, you will need to follow the [Dockerfile specifications](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/byoi-specifications.html) in the *Amazon SageMaker Unified Studio User Guide*.  
`Dockerfile` examples for SageMaker Unified Studio can be found in [Dockerfile example](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/byoi-specifications.html#byoi-specifications-example) in the *Amazon SageMaker Unified Studio User Guide*.
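As an illustration only (not one of the official samples), the following writes a minimal `Dockerfile` that follows the UID, GID, and `WORKDIR` requirements in [Custom image specifications](studio-updated-byoi-specs.md). The base image, installed packages, and start command are assumptions; adapt them to your application.

```shell
# Write a minimal example Dockerfile (illustrative only; the base image and
# package choices are assumptions, not official SageMaker AI samples)
cat > Dockerfile <<'EOF'
FROM public.ecr.aws/docker/library/python:3.11-slim

# SageMaker AI custom images run as UID 1000 and GID 100
RUN useradd --create-home --uid 1000 --gid 100 sagemaker-user

# JupyterLab is needed for the JupyterLab application type
RUN pip install --no-cache-dir jupyterlab

USER 1000:100
# The space's Amazon EBS volume is mounted at /home/sagemaker-user
WORKDIR /home/sagemaker-user

ENTRYPOINT ["jupyter", "lab", "--ip", "0.0.0.0", "--port", "8888"]
EOF
```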

1. In the directory containing your `Dockerfile`, build the Docker image using the following command. The period (`.`) specifies the current directory as the build context; your `Dockerfile` must be in this directory.

   ```
   docker build -t ${LOCAL_IMAGE_NAME} .
   ```

   After the build completes, you can list your container image information with the following command.

   ```
   docker images
   ```

1. (Optional) You can test your image by using the following command.

   ```
   docker run -it ${LOCAL_IMAGE_NAME}
   ```

   In the output you will find that your server is running at a URL, like `http://127.0.0.1:8888/...`. You can test the image by copying the URL into the browser. 

   If this does not work, you may need to include `-p port:port` in the `docker run` command. This option maps the exposed port on the container to a port on the host system. For more information about `docker run`, see [Running containers](https://docs.docker.com/engine/containers/run/) in the *Docker documentation*.

   Once you have verified that the server is working, you can stop the server and shut down all kernels before continuing. The instructions are viewable in the output.

## Add a Docker image to Amazon ECR
<a name="studio-updated-byoi-add-container-image"></a>

To add a container image to Amazon ECR, you will need to do the following.
+ Create an Amazon ECR repository.
+ Log in to your default registry.
+ Push the image to the Amazon ECR repository.

**Note**  
The Amazon ECR repository must be in the same AWS Region as the domain you are attaching the image to.

**To build and push the container image to Amazon ECR**

1. First set your variables for the AWS CLI commands that follow.

   ```
   ACCOUNT_ID=account-id
   REGION=aws-region
   ECR_REPO_NAME=ecr-repository-name
   ```
   + `account-id` is your account ID. You can find this at the top right of any AWS console page. For example, the [SageMaker AI console](https://console.aws.amazon.com/sagemaker).
   + `aws-region` is the AWS Region of your Amazon SageMaker AI domain. You can find this at the top right of any AWS console page. 
   + `ecr-repository-name` is the name of your Amazon Elastic Container Registry repository, that you define here. To view your Amazon ECR repositories, see the [Amazon ECR console](https://console.aws.amazon.com/ecr).

1. Log in to Amazon ECR and sign in to Docker.

   ```
   aws ecr get-login-password \
       --region ${REGION} | \
       docker login \
       --username AWS \
       --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com
   ```

   On successful authentication, you receive a `Login Succeeded` message.
**Important**  
If you receive an error, you may need to install or upgrade to the latest version of the AWS CLI. For more information, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

1. Tag the image in a format compatible with Amazon ECR, to push to your repository.

   ```
   docker tag \
       ${LOCAL_IMAGE_NAME} \
       ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${ECR_REPO_NAME}
   ```

1. Create an Amazon ECR repository using the AWS CLI. To create the repository using the Amazon ECR console, see [Creating an Amazon ECR private repository to store images](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html).

   ```
   aws ecr create-repository \
       --region ${REGION} \
       --repository-name ${ECR_REPO_NAME}
   ```

1. Push the tagged image to your Amazon ECR repository.

   ```
   docker push ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${ECR_REPO_NAME}
   ```

Once the image has been successfully added to your Amazon ECR repository, you can view it in the [Amazon ECR console](https://console.aws.amazon.com/ecr).

# Attach your custom image to your domain
<a name="studio-updated-byoi-how-to-attach-to-domain"></a>

This page provides instructions on how to attach your custom image to your domain. Use the following procedure to use the Amazon SageMaker AI console to navigate to your domain and start the **Attach image** process.

The following instructions assume that you have pushed an image to an Amazon ECR repository in the same AWS Region as your domain. If you have not already done so, see [Create a custom image and push to Amazon ECR](studio-updated-byoi-how-to-prepare-image.md).

When you choose to attach an image, you will have two options:
+ Attach a **New image**: This option will create an image and image version in your SageMaker AI image store and then attach it to your domain.
**Note**  
If you are continuing the BYOI process, from [Create a custom image and push to Amazon ECR](studio-updated-byoi-how-to-prepare-image.md), use the **New image** option.
+ Attach an **Existing image**: If you have already created your intended custom image in the SageMaker AI image store, use this option. This option attaches an existing custom image to your domain. To view your custom images in the SageMaker AI image store, see [View custom image details (console)](studio-updated-byoi-view-images.md#studio-updated-byoi-view-images-console).

------
#### [ New image ]

**To attach a new image to your domain**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker).

1. Expand the **Admin configurations** section, if you have not already done so.

1. Under **Admin configurations**, choose **Domains**.

1. From the list of **Domains**, select the domain you want to attach the image to.
**Note**  
If you are attaching the image to a SageMaker Unified Studio project and you need clarification on which domain to use, see [View the SageMaker AI domain details associated with your project](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/view-project-details.html#view-project-details-smai-domain).

1. Open the **Environment** tab.

1. In the **Custom images for personal Studio apps** section, choose **Attach image**.

1. For the **Image source**, choose **New image**.

1. Include your Amazon ECR image URI. The format is as follows.

   ```
   account-id.dkr.ecr.aws-region.amazonaws.com/repository-name:tag
   ```

   1. To obtain your Amazon ECR image URI, navigate to your [Amazon ECR private repositories](https://console.aws.amazon.com/ecr/private-registry/repositories) page.

   1. Choose your repository name link.

   1. Choose the **Copy URI** icon that corresponds to your image version (**Image tag**).

1. Follow the rest of the instructions to attach your custom image.
**Note**  
Ensure that you are using the application type consistent with your `Dockerfile`. For more information, see [Dockerfile samples](studio-updated-byoi-specs.md#studio-updated-byoi-specs-dockerfile-templates).

Once the image has been successfully attached to your domain, you will be able to view it in the **Environment** tab.

------
#### [ Existing image ]

**To attach an existing image to your domain**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker).

1. Expand the **Admin configurations** section, if you have not already done so.

1. Under **Admin configurations**, choose **Domains**.

1. From the list of **Domains**, select the domain you want to attach the image to.
**Note**  
If you are attaching the image to a SageMaker Unified Studio project and you need clarification on which domain to use, see [View the SageMaker AI domain details associated with your project](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/view-project-details.html#view-project-details-smai-domain).

1. Open the **Environment** tab.

1. In the **Custom images for personal Studio apps** section, choose **Attach image**.

1. For the **Image source**, choose **Existing image**.

1. Choose an existing image and image version from the SageMaker AI image store.

   If you are unable to view your image version, you may need to create an image version. For more information, see [View custom image details (console)](studio-updated-byoi-view-images.md#studio-updated-byoi-view-images-console).

1. Follow the rest of the instructions to attach your custom image.
**Note**  
Ensure that you are using the application type consistent with your `Dockerfile`. For more information, see [Dockerfile samples](studio-updated-byoi-specs.md#studio-updated-byoi-specs-dockerfile-templates).

Once the image has been successfully attached to your domain, you will be able to view it in the **Environment** tab.

------

Once your image has been successfully attached to your domain, the domain users can choose the image for their application. For more information, see [Launch a custom image in Studio](studio-updated-byoi-how-to-launch.md).

**Note**  
If you have attached a custom image to your SageMaker Unified Studio project, you will need to launch the application from within SageMaker Unified Studio. For more information, see [Launch your custom image](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/byoi-launch-custom-image.html) in the *Amazon SageMaker Unified Studio User Guide*.

# Update container configuration
<a name="studio-updated-byoi-how-to-container-configuration"></a>

You can bring custom Docker images into your machine learning workflows. A key aspect of customizing these images is configuring the container configuration, or [ContainerConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ContainerConfig.html). The following page provides an example of how to configure your `ContainerConfig`. 

An entrypoint is the command or script that runs when the container starts. Custom entrypoints enable you to set up your environment, initialize services, or perform any necessary setup before your application launches. 

This example provides instructions on how to configure a custom entrypoint for your JupyterLab application using the AWS CLI. This example assumes that you have already created a custom image and domain. For instructions, see [Attach your custom image to your domain](studio-updated-byoi-how-to-attach-to-domain.md).

1. First set your variables for the AWS CLI commands that follow.

   ```
   APP_IMAGE_CONFIG_NAME=app-image-config-name
   ENTRYPOINT_FILE=entrypoint-file-name
   ENV_KEY=environment-key
   ENV_VALUE=environment-value
   REGION=aws-region
   DOMAIN_ID=domain-id
   IMAGE_NAME=custom-image-name
   IMAGE_VERSION=custom-image-version
   ```
   + `app-image-config-name` is the name of your application image configuration.
   + `entrypoint-file-name` is the name of your container's entrypoint script. For example, `entrypoint.sh`.
   + `environment-key` is the name of your environment variable.
   + `environment-value` is the value assigned to your environment variable.
   + `aws-region` is the AWS Region of your Amazon SageMaker AI domain. You can find this at the top right of any AWS console page. 
   + `domain-id` is your domain ID. To view your domains, see [View domains](domain-view.md).
   + `custom-image-name` is the name of your custom image. To view your custom image details, see [View custom image details (console)](studio-updated-byoi-view-images.md#studio-updated-byoi-view-images-console).

     If you followed the instructions in [Attach your custom image to your domain](studio-updated-byoi-how-to-attach-to-domain.md), you may want to use the same image name you used in that process.
   + `custom-image-version` is the version number of your custom image. This should be an integer, representing the version of your image. To view your custom image details, see [View custom image details (console)](studio-updated-byoi-view-images.md#studio-updated-byoi-view-images-console).
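The entrypoint file referenced above must exist inside your image. As an illustration only (the filename and start command are assumptions, not an official template), an `entrypoint.sh` might look like the following.

```shell
# Write an illustrative entrypoint script (contents are an example only)
cat > entrypoint.sh <<'EOF'
#!/bin/bash
set -e
# Any environment setup can happen here before the application starts
echo "Starting JupyterLab for space: ${SAGEMAKER_SPACE_NAME:-unknown}"
exec jupyter lab --ip 0.0.0.0 --port 8888
EOF
chmod +x entrypoint.sh
```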

1. Use the [CreateAppImageConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAppImageConfig.html) API to create an app image configuration.

   ```
   aws sagemaker create-app-image-config \
       --region ${REGION} \
       --app-image-config-name "${APP_IMAGE_CONFIG_NAME}" \
       --jupyter-lab-app-image-config "{
           \"ContainerConfig\": {
               \"ContainerEntrypoint\": [\"${ENTRYPOINT_FILE}\"],
               \"ContainerEnvironmentVariables\": {
                   \"${ENV_KEY}\": \"${ENV_VALUE}\"
               }
           }
       }"
   ```

1. Use the [UpdateDomain](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateDomain.html) API to update the default settings for your domain. This attaches the custom image as well as the application image configuration. 

   ```
   aws sagemaker update-domain \
       --region ${REGION} \
       --domain-id "${DOMAIN_ID}" \
       --default-user-settings "{
           \"JupyterLabAppSettings\": {
               \"CustomImages\": [
                   {
                       \"ImageName\": \"${IMAGE_NAME}\",
                       \"ImageVersionNumber\": ${IMAGE_VERSION},
                       \"AppImageConfigName\": \"${APP_IMAGE_CONFIG_NAME}\"
                   }
               ]
           }
       }"
   ```

# Launch a custom image in Studio
<a name="studio-updated-byoi-how-to-launch"></a>

After you have attached a custom image to your Amazon SageMaker AI domain, the image becomes available to the users in the domain. Use the following instructions to launch an application with the custom image.

**Note**  
If you have attached a custom image to your SageMaker Unified Studio project, you will need to launch the application from within SageMaker Unified Studio. For more information, see [Launch your custom image](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/byoi-launch-custom-image.html) in the *Amazon SageMaker Unified Studio User Guide*.

1. Launch Amazon SageMaker Studio. For instructions, see [Launch Amazon SageMaker Studio](studio-updated-launch.md).

1. If you have not already done so, expand the **Applications** section.

1. Choose the application from the **Applications** section. If you do not see the application available, the application may be hidden from you. In this case, contact your administrator.

1. To create a space, choose **Create *application* space** and follow the instructions to create the space.

   To choose an existing space, choose the link name of the space you want to open.

   

1. Under **Image**, choose the image you want to use.

   If the **Image** dropdown is unavailable, you may need to stop your space. Choose **Stop space** to do so.

1. Confirm the settings for the space and choose **Run space**.

# View your custom image details
<a name="studio-updated-byoi-view-images"></a>

The following page provides instructions on how to view your custom image details in the SageMaker AI image store.

## View custom image details (console)
<a name="studio-updated-byoi-view-images-console"></a>

The following provides instructions on how to view your custom images using the SageMaker AI console. In this section, you can view and edit your image details.

**View your custom images (console)**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker).

1. Expand the **Admin configurations** section.

1. Under **Admin configurations**, choose **Images**.

1. From the list of **Custom images**, select the hyperlink of your image name.

## View custom image details (AWS CLI)
<a name="studio-updated-byoi-view-images-cli"></a>

The following section shows an example of how to view your custom images using the AWS CLI.

```
aws sagemaker list-images \
    --region aws-region
```

# Speed up container startup with SOCI
<a name="soci-indexing"></a>

SOCI (Seekable Open Container Initiative) indexing enables lazy loading of custom container images in [Amazon SageMaker Studio](studio-updated.md) or [Amazon SageMaker Unified Studio](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/what-is-sagemaker-unified-studio.html). SOCI significantly reduces startup times by roughly 30-70% for your custom [Bring your own image (BYOI)](studio-updated-byoi.md) containers. Latency improvement varies depending on the size of the image, hosting instance availability, and other application dependencies. SOCI creates an index that allows containers to launch with only necessary components, fetching additional files on-demand as needed.

SOCI addresses the slow startup times for custom container images that interrupt iterative machine learning (ML) development workflows. As ML workloads become more complex, container images have grown larger, creating startup delays that hamper development cycles.

**Topics**
+ [Key benefits](#soci-indexing-key-benefits)
+ [How SOCI indexing works](#soci-indexing-how-works)
+ [Architecture components](#soci-indexing-architecture-components)
+ [Supported tools](#soci-indexing-supported-tools)
+ [Permissions for SOCI indexing](soci-indexing-setup.md)
+ [Create SOCI indexes with nerdctl and SOCI CLI example](soci-indexing-example-create-indexes.md)
+ [Integrate SOCI-indexed images with Studio example](soci-indexing-example-integrate-studio.md)

## Key benefits
<a name="soci-indexing-key-benefits"></a>
+ **Faster iteration cycles**: Reduce container startup time, depending on image size and instance type
+ **Universal optimization**: Extend performance benefits to all custom BYOI containers in Studio

## How SOCI indexing works
<a name="soci-indexing-how-works"></a>

SOCI creates a specialized metadata index that maps your container image's internal file structure. This index enables access to individual files without downloading the entire image. The SOCI index is stored as an OCI (Open Container Initiative) compliant artifact in [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) and linked to your original container image, preserving image digests and signature validity.

When you launch a container in Studio, the system uses the SOCI index to identify and download only essential files needed for startup. Additional components are fetched in parallel as your application requires them.

## Architecture components
<a name="soci-indexing-architecture-components"></a>
+ **Original container image**: Your base container stored in Amazon ECR
+ **SOCI index artifact**: Metadata mapping your image's file structure
+ **OCI Image Index manifest**: Links your original image and SOCI index
+ **Finch container runtime**: Enables lazy loading integration with Studio

## Supported tools
<a name="soci-indexing-supported-tools"></a>


| Tool | Integration | 
| --- | --- | 
| nerdctl | Requires containerd setup | 
| Finch CLI | Native SOCI support | 
| Docker with SOCI CLI | Additional tooling required | 


# Permissions for SOCI indexing
<a name="soci-indexing-setup"></a>

Create SOCI indexes for your container images and store them in Amazon ECR before using SOCI indexing with [Amazon SageMaker Studio](studio-updated.md) or [Amazon SageMaker Unified Studio](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/what-is-sagemaker-unified-studio.html).

**Topics**
+ [Prerequisites](#soci-indexing-setup-prerequisites)
+ [Required IAM permissions](#soci-indexing-setup-iam-permissions)

## Prerequisites
<a name="soci-indexing-setup-prerequisites"></a>
+ AWS account with an [AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started.html) (IAM) role that has permissions to manage the following:
  + [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html)
  + [Amazon SageMaker AI](https://docs.aws.amazon.com/sagemaker/latest/dg/gs.html)
+ [Amazon ECR private repositories](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html) for storing your container images
+ [AWS CLI version 2 or later](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) configured with appropriate credentials
+ The following container tools:
  + Required: [soci-snapshotter](https://github.com/awslabs/soci-snapshotter)
  + One of the following:
    + [nerdctl](https://github.com/containerd/nerdctl)
    + [finch](https://github.com/runfinch/finch)

## Required IAM permissions
<a name="soci-indexing-setup-iam-permissions"></a>

Your IAM role needs permissions to:
+ Create and manage SageMaker AI resources (domains, images, app configs).
  + You can use the [SageMakerFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSageMakerFullAccess.html) AWS managed policy. For details about its permissions, see [AWS managed policy: AmazonSageMakerFullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonSageMakerFullAccess).
+ [IAM permissions for pushing an image to an Amazon ECR private repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-push-iam.html).

# Create SOCI indexes with nerdctl and SOCI CLI example
<a name="soci-indexing-example-create-indexes"></a>

The following page provides an example of how to create SOCI indexes with nerdctl and the SOCI CLI.

**Create SOCI indexes example**

1. First, set your variables for the AWS CLI commands that follow, as in the following example.

   ```
   ACCOUNT_ID="111122223333"
   REGION="us-east-1"
   REPOSITORY_NAME="repository-name"
   ORIGINAL_IMAGE_TAG="original-image-tag"
   SOCI_IMAGE_TAG="soci-indexed-image-tag"
   ```

   Variable definitions:
   + `ACCOUNT_ID` is your AWS account ID
   + `REGION` is the AWS Region of your Amazon ECR private registry
   + `REPOSITORY_NAME` is the name of your Amazon ECR private repository
   + `ORIGINAL_IMAGE_TAG` is the tag of your original image
   + `SOCI_IMAGE_TAG` is the tag of your SOCI-indexed image

1. Install required tools:

   ```
   # Install SOCI CLI, containerd, and nerdctl
   sudo yum install soci-snapshotter
   sudo yum install containerd jq  
   sudo systemctl start soci-snapshotter
   sudo systemctl restart containerd
   sudo yum install nerdctl
   ```

1. Set your registry variables:

   ```
   REGISTRY_USER=AWS
   REGISTRY="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"
   ```

1. Export your region and authenticate to Amazon ECR:

   ```
   export AWS_REGION=$REGION
   REGISTRY_PASSWORD=$(/usr/local/bin/aws ecr get-login-password --region $AWS_REGION)
   echo $REGISTRY_PASSWORD | sudo nerdctl login -u $REGISTRY_USER --password-stdin $REGISTRY
   ```

1. Pull your original container image:

   ```
   sudo nerdctl pull $REGISTRY/$REPOSITORY_NAME:$ORIGINAL_IMAGE_TAG
   ```

1. Create the SOCI index:

   ```
   sudo nerdctl image convert --soci $REGISTRY/$REPOSITORY_NAME:$ORIGINAL_IMAGE_TAG $REGISTRY/$REPOSITORY_NAME:$SOCI_IMAGE_TAG
   ```

1. Push the SOCI-indexed image:

   ```
   sudo nerdctl push --platform linux/amd64 $REGISTRY/$REPOSITORY_NAME:$SOCI_IMAGE_TAG
   ```

This process creates two artifacts for the original container image in your Amazon ECR repository:
+ SOCI index: metadata that enables lazy loading
+ OCI Image Index manifest: an OCI-compliant manifest that links your original image and its SOCI index
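
The image URIs used in the steps above follow the standard Amazon ECR naming convention. As a quick local sanity check, you can compose the URI from the same variables before pushing; the following sketch uses the placeholder values from step 1 and makes no AWS calls:

```shell
# Placeholder values from step 1; substitute your own.
ACCOUNT_ID="111122223333"
REGION="us-east-1"
REPOSITORY_NAME="repository-name"
SOCI_IMAGE_TAG="soci-indexed-image-tag"

# Compose the registry host and the full SOCI-indexed image URI.
REGISTRY="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"
SOCI_IMAGE_URI="$REGISTRY/$REPOSITORY_NAME:$SOCI_IMAGE_TAG"
echo "$SOCI_IMAGE_URI"
```

Echoing the composed URI before running `nerdctl push` helps catch a mistyped account ID or Region early.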

# Integrate SOCI-indexed images with Studio example
<a name="soci-indexing-example-integrate-studio"></a>

To use SOCI-indexed images in Studio, you must reference the SOCI-indexed image tag rather than the original container image tag. Use the tag that you specified during the SOCI conversion process (for example, `SOCI_IMAGE_TAG` in the [Create SOCI indexes with nerdctl and SOCI CLI example](soci-indexing-example-create-indexes.md)).

**Integrate SOCI-indexed images example**

1. First, set your variables for the AWS CLI commands that follow, as in the following example.

   ```
   ACCOUNT_ID="111122223333"
   REGION="us-east-1"
   IMAGE_NAME="sagemaker-image-name"
   IMAGE_CONFIG_NAME="sagemaker-image-config-name"
   ROLE_ARN="your-role-arn"
   DOMAIN_ID="domain-id"
   SOCI_IMAGE_TAG="soci-indexed-image-tag"
   ```

   Variable definitions:
   + `ACCOUNT_ID` is your AWS account ID
   + `REGION` is the AWS Region of your Amazon ECR private registry
   + `IMAGE_NAME` is the name of your SageMaker image
   + `IMAGE_CONFIG_NAME` is the name of your SageMaker image configuration
   + `ROLE_ARN` is the ARN of your execution role with the permissions listed in [Required IAM permissions](soci-indexing-setup.md#soci-indexing-setup-iam-permissions)
   + `DOMAIN_ID` is the [domain ID](https://docs.aws.amazon.com/sagemaker/latest/dg/domain-view.html)
**Note**  
If you are attaching the image to a SageMaker Unified Studio project and you need clarification on which domain to use, see [View the SageMaker AI domain details associated with your project](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/view-project-details.html#view-project-details-smai-domain).
   + `SOCI_IMAGE_TAG` is the tag of your SOCI-indexed image

1. Export your region:

   ```
   export AWS_REGION=$REGION
   ```

1. Create a SageMaker image:

   ```
   aws sagemaker create-image \
       --image-name "$IMAGE_NAME" \
       --role-arn "$ROLE_ARN"
   ```

1. Create a SageMaker Image Version using your SOCI index URI:

   ```
   IMAGE_INDEX_URI="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$IMAGE_NAME:$SOCI_IMAGE_TAG"
   
   aws sagemaker create-image-version \
       --image-name "$IMAGE_NAME" \
       --base-image "$IMAGE_INDEX_URI"
   ```

1. Create an application image configuration and update your Amazon SageMaker AI domain to include the custom image for your app. You can do this for Code Editor, based on Code-OSS, Visual Studio Code - Open Source (Code Editor) and JupyterLab applications. Choose the application option below to view the steps.

------
#### [ Code Editor ]

   Create an application image configuration for Code Editor:

   ```
   aws sagemaker create-app-image-config \
       --app-image-config-name "$IMAGE_CONFIG_NAME" \
       --code-editor-app-image-config '{ "FileSystemConfig": { "MountPath": "/home/sagemaker-user", "DefaultUid": 1000, "DefaultGid": 100 } }'
   ```

   Update your Amazon SageMaker AI domain to include the custom image for Code Editor:

   ```
   aws sagemaker update-domain \
       --domain-id "$DOMAIN_ID" \
       --default-user-settings "{
           \"CodeEditorAppSettings\": {
               \"CustomImages\": [{
                   \"ImageName\": \"$IMAGE_NAME\",
                   \"AppImageConfigName\": \"$IMAGE_CONFIG_NAME\"
               }]
           }
       }"
   ```

------
#### [ JupyterLab ]

   Create an application image configuration for JupyterLab:

   ```
   aws sagemaker create-app-image-config \
       --app-image-config-name "$IMAGE_CONFIG_NAME" \
       --jupyter-lab-app-image-config '{ "FileSystemConfig": { "MountPath": "/home/sagemaker-user", "DefaultUid": 1000, "DefaultGid": 100 } }'
   ```

   Update your Amazon SageMaker AI domain to include the custom image for JupyterLab:

   ```
   aws sagemaker update-domain \
       --domain-id "$DOMAIN_ID" \
       --default-user-settings "{
           \"JupyterLabAppSettings\": {
               \"CustomImages\": [{
                   \"ImageName\": \"$IMAGE_NAME\",
                   \"AppImageConfigName\": \"$IMAGE_CONFIG_NAME\"
               }]
           }
       }"
   ```

------
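
Because the `--default-user-settings` value is built from shell variables, it helps to confirm locally that the variables actually expand inside the JSON before you call `update-domain`. A single-quoted string would pass the literal text `$IMAGE_NAME` through to the API unexpanded. The following sketch uses placeholder values, and `SETTINGS_JSON` is a name introduced here for illustration:

```shell
# Placeholder values; substitute your own image and config names.
IMAGE_NAME="sagemaker-image-name"
IMAGE_CONFIG_NAME="sagemaker-image-config-name"

# Build the settings JSON with double quotes so the shell expands the
# variables; escaped inner quotes keep the JSON valid.
SETTINGS_JSON="{
    \"JupyterLabAppSettings\": {
        \"CustomImages\": [{
            \"ImageName\": \"$IMAGE_NAME\",
            \"AppImageConfigName\": \"$IMAGE_CONFIG_NAME\"
        }]
    }
}"
echo "$SETTINGS_JSON"
```

If the echoed output still contains a literal `$IMAGE_NAME`, the quoting is wrong and the API call would register an image with that literal name.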

1. After you update your domain to include your custom image, you can create an application in Studio using your custom image. When you [Launch a custom image in Studio](studio-updated-byoi-how-to-launch.md), ensure that you are using your custom image.

# Detach and clean up custom image resources
<a name="studio-updated-byoi-how-to-detach-from-domain"></a>

The following page provides instructions on how to detach your custom images and clean up the related resources using the Amazon SageMaker AI console or the AWS Command Line Interface (AWS CLI). 

**Important**  
You must first detach your custom image from your domain before deleting the image from the SageMaker AI image store. If not, you may experience errors while viewing your domain information or attaching new custom images to your domain.   
If you are experiencing an error loading a custom image, see [Failure to load custom image](studio-updated-troubleshooting.md#studio-updated-troubleshooting-custom-image). 

## Detach and delete custom images (console)
<a name="studio-updated-byoi-how-to-detach-from-domain-console"></a>

The following provides instructions on how to detach your custom images from SageMaker AI and clean up your custom image resources using the console.

**Detach your custom image from your domain**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker).

1. Expand the **Admin configurations** section.

1. Under **Admin configurations**, choose **Domains**.

1. From the list of **domains**, select a domain.

1. Open the **Environment** tab.

1. For **Custom images for personal Studio apps**, select the checkboxes for the images you want to detach.

1. Choose **Detach**.

1. Follow the instructions to detach.

**Delete your custom image**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker).

1. Expand the **Admin configurations** section, if you have not already done so.

1. Under **Admin configurations**, choose **Images**.

1. From the list of **Images**, select an image you would like to delete.

1. Choose **Delete**.

1. Follow the instructions to delete your image and all its versions from SageMaker AI.

**Delete your custom images and repository from Amazon ECR**
**Important**  
This will also delete any container images and artifacts in this repository.

1. Open the [Amazon ECR console](https://console.aws.amazon.com/ecr).

1. Expand the left navigation pane, if it is not already open.

1. Under **Private registry**, choose **Repositories**.

1. Select the repositories that you want to delete.

1. Choose **Delete**.

1. Follow the instructions to delete.

## Detach and delete custom images (AWS CLI)
<a name="studio-updated-byoi-how-to-detach-from-domain-cli"></a>

The following section shows an example of how to detach your custom images by using the AWS CLI.

1. First, set your variables for the AWS CLI commands that follow.

   ```
   ACCOUNT_ID=account-id
   REGION=aws-region
   APP_IMAGE_CONFIG=app-image-config
   SAGEMAKER_IMAGE_NAME=custom-image-name
   ```
   + `account-id` is your AWS account ID.
   + `aws-region` is the AWS Region of your Amazon SageMaker AI domain. You can find this at the top right of any AWS console page. 
   + `app-image-config` is the name of your application image configuration. Use the following AWS CLI command to list the application image configurations in your AWS Region.

     ```
     aws sagemaker list-app-image-configs \
            --region ${REGION}
     ```
   + `custom-image-name` is the custom image name. Use the following AWS CLI command to list the images in your AWS Region.

     ```
     aws sagemaker list-images \
            --region ${REGION}
     ```

1. To detach the image and image versions from your domain using these instructions, create or update a domain configuration JSON file.
**Note**  
If you followed the instructions in [Attach your custom image to your domain](studio-updated-byoi-how-to-attach-to-domain.md), you may have updated your domain using the file named `update-domain.json`.   
If you do not have that file, you can create a new JSON file instead.

   Create a file named `update-domain.json` that you will use to update your domain.

1. To detach all custom images, set `CustomImages` to an empty list, such that `"CustomImages": []`. Choose one of the following to view example configuration files for Code Editor or JupyterLab.

------
#### [ Code Editor: update domain configuration file example ]

   A configuration file example for Code Editor, using [CodeEditorAppSettings](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CodeEditorAppSettings.html).

   ```
   {
       "DomainId": "domain-id",
       "DefaultUserSettings": {
           "CodeEditorAppSettings": {
               "CustomImages": [
               ]
           }
       }
   }
   ```

------
#### [ JupyterLab: update domain configuration file example ]

   A configuration file example for JupyterLab, using [JupyterLabAppSettings](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_JupyterLabAppSettings.html).

   ```
   {
       "DomainId": "domain-id",
       "DefaultUserSettings": {
           "JupyterLabAppSettings": {
               "CustomImages": [
               ]
           }
       }
   }
   ```

------

   `domain-id` is the domain ID that your image is attached to. Use the following command to list your domains.

   ```
   aws sagemaker list-domains \
         --region ${REGION}
   ```

1. Save the file.

1. Call the [update-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-domain.html) AWS CLI command using the updated domain configuration file, `update-domain.json`.
**Note**  
Before you can update the custom images, you must delete all of the **applications** in your domain. You **do not** need to delete user profiles or shared spaces. For instructions on deleting applications, choose one of the following options.  
If you want to use the SageMaker AI console, see [Shut down SageMaker AI resources in your domain](sm-console-domain-resources-shut-down.md).
If you want to use the AWS CLI, use steps 1 through 3 of [Delete an Amazon SageMaker AI domain (AWS CLI)](gs-studio-delete-domain.md#gs-studio-delete-domain-cli).

   ```
   aws sagemaker update-domain \
       --cli-input-json file://update-domain.json \
       --region ${REGION}
   ```

1. Delete the app image config.

   ```
   aws sagemaker delete-app-image-config \
       --app-image-config-name ${APP_IMAGE_CONFIG}
   ```

1. Delete the custom image. This also deletes all of its image versions in SageMaker AI, but does not delete the container image and image versions in Amazon ECR. To delete the Amazon ECR resources, use the optional steps that follow.

   ```
   aws sagemaker delete-image \
       --image-name ${SAGEMAKER_IMAGE_NAME}
   ```

1. (Optional) Delete your Amazon ECR resources. The following list provides AWS CLI commands to obtain your Amazon ECR resource information for the steps below.

   1. Set your variables for the AWS CLI commands that follow.

      ```
      ECR_REPO_NAME=ecr-repository-name
      ```

      `ecr-repository-name` is the name of your Amazon Elastic Container Registry repository. 

      To list the details of your repositories, use the following command.

      ```
      aws ecr describe-repositories \
              --region ${REGION}
      ```

   1. Delete your repository from Amazon ECR. 
**Important**  
This will also delete any container images and artifacts in this repository.

      ```
      aws ecr delete-repository \
            --repository-name ${ECR_REPO_NAME} \
            --force \
            --region ${REGION}
      ```

# Lifecycle configurations within Amazon SageMaker Studio
<a name="studio-lifecycle-configurations"></a>

Lifecycle configurations (LCCs) are scripts that administrators and users can use to automate the customization of the following applications within your Amazon SageMaker Studio environment:
+ Amazon SageMaker AI JupyterLab
+ Code Editor, based on Code-OSS, Visual Studio Code - Open Source
+ Studio Classic
+ Notebook instance

Customizing your application includes:
+ Installing custom packages
+ Configuring extensions
+ Preloading datasets
+ Setting up source code repositories

Users create and attach built-in lifecycle configurations to their own user profiles. Administrators create and attach default or built-in lifecycle configurations at the domain, space, or user profile level.

**Important**  
Amazon SageMaker Studio first runs the built-in lifecycle configuration and then runs the default LCC. Amazon SageMaker AI won't resolve package conflicts between the user and administrator LCCs. For example, if the built-in LCC installs `python3.11` and the default LCC installs `python3.12`, Studio installs `python3.12`. 

# Create and attach lifecycle configurations
<a name="studio-lifecycle-configurations-create"></a>

You can create and attach lifecycle configurations using either the AWS Management Console or the AWS Command Line Interface.

**Topics**
+ [Create and attach lifecycle configurations (AWS CLI)](#studio-lifecycle-configurations-create-cli)
+ [Create and attach lifecycle configurations (console)](#studio-lifecycle-configurations-create-console)

## Create and attach lifecycle configurations (AWS CLI)
<a name="studio-lifecycle-configurations-create-cli"></a>

**Important**  
Before you begin, complete the following prerequisites:   
Update the AWS CLI by following the steps in [Installing the current AWS CLI version](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html#install-tool-bundled).
From your local machine, run `aws configure` and provide your AWS credentials. For information about AWS credentials, see [Understanding and getting your AWS credentials](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html). 
Onboard to Amazon SageMaker AI domain. For conceptual information, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md). For a quickstart guide, see [Use quick setup for Amazon SageMaker AI](onboard-quick-start.md).

The following procedure shows how to create a lifecycle configuration script that prints `Hello World` within Code Editor or JupyterLab.

**Note**  
Each script can have up to **16,384 characters**.

1. From your local machine, create a file named `my-script.sh` with the following content:

   ```
   #!/bin/bash
   set -eux
   echo 'Hello World!'
   ```

1. Use the following command to convert your `my-script.sh` file into base64 format. This encoding prevents errors caused by spacing and line-break characters.

   ```
   LCC_CONTENT=$(openssl base64 -A -in my-script.sh)
   ```

1. Create a lifecycle configuration for use with Studio. The following command creates a lifecycle configuration that runs when you launch an associated `JupyterLab` application:

   ```
   aws sagemaker create-studio-lifecycle-config \
   --region region \
   --studio-lifecycle-config-name my-lcc \
   --studio-lifecycle-config-content $LCC_CONTENT \
   --studio-lifecycle-config-app-type application-type
   ```

   For `studio-lifecycle-config-app-type`, specify either *CodeEditor* or *JupyterLab*.
**Note**  
Note the ARN of the newly created lifecycle configuration that is returned. You need this ARN to attach the lifecycle configuration to your application.
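
Assuming the example `my-script.sh` above, you can verify locally that the base64 content round-trips cleanly and contains no line breaks before passing it to `create-studio-lifecycle-config`:

```shell
# Recreate the example script from step 1.
printf '%s\n' '#!/bin/bash' 'set -eux' "echo 'Hello World!'" > my-script.sh

# Encode as a single line (-A) and decode it back to confirm the round trip.
LCC_CONTENT=$(openssl base64 -A -in my-script.sh)
echo "$LCC_CONTENT" | openssl base64 -d -A
```

The decoded output should match the script exactly; a mismatch usually points to a line-break or whitespace problem in the encoding step.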

To ensure that the environments are customized properly, users and administrators use different commands to attach lifecycle configurations.

### Attach default lifecycle configurations (administrator)
<a name="studio-lifecycle-configurations-attach-cli-administrator"></a>

To attach the lifecycle configuration, you must update the `UserSettings` for your domain or user profile. Lifecycle configuration scripts that are associated at the domain level are inherited by all users. However, scripts that are associated at the user profile level are scoped to a specific user. 

You can create a new user profile, domain, or space with a lifecycle configuration attached by using the following commands:
+ [create-user-profile](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-user-profile.html)
+ [create-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-domain.html)
+ [create-space](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-space.html)

The following command creates a user profile with a lifecycle configuration for a JupyterLab application. Add the lifecycle configuration ARN from the preceding step to the `JupyterLabAppSettings` of the user. You can add multiple lifecycle configurations at the same time by passing a list of them. When a user launches a JupyterLab application with the AWS CLI, they can specify a lifecycle configuration instead of using the default one. The lifecycle configuration that the user passes must belong to the list of lifecycle configurations in `JupyterLabAppSettings`.

```
# Create a new UserProfile
aws sagemaker create-user-profile --domain-id domain-id \
--user-profile-name user-profile-name \
--region region \
--user-settings '{
"JupyterLabAppSettings": {
  "LifecycleConfigArns":
    [lifecycle-configuration-arn-list]
  }
}'
```

The following command creates a user profile with a lifecycle configuration for a Code Editor application. Add the lifecycle configuration ARN from the preceding step to the `CodeEditorAppSettings` of the user. You can add multiple lifecycle configurations at the same time by passing a list of them. When a user launches a Code Editor application with the AWS CLI, they can specify a lifecycle configuration instead of using the default one. The lifecycle configuration that the user passes must belong to the list of lifecycle configurations in `CodeEditorAppSettings`.

```
# Create a new UserProfile
aws sagemaker create-user-profile --domain-id domain-id \
--user-profile-name user-profile-name \
--region region \
--user-settings '{
"CodeEditorAppSettings": {
  "LifecycleConfigArns":
    [lifecycle-configuration-arn-list]
  }
}'
```

### Attach built-in lifecycle configurations (user)
<a name="studio-lifecycle-configurations-attach-cli-user"></a>

To attach the lifecycle configuration, you must update the `UserSettings` for your user profile.

The following command updates a user profile with a lifecycle configuration for a JupyterLab application. Add the lifecycle configuration ARN from the preceding step to the `JupyterLabAppSettings` of your user profile.

```
# Update a UserProfile
aws sagemaker update-user-profile --domain-id domain-id \
--user-profile-name user-profile-name \
--region region \
--user-settings '{
"JupyterLabAppSettings": {
  "BuiltInLifecycleConfigArn":"lifecycle-configuration-arn"
  }
}'
```

The following command updates a user profile with a lifecycle configuration for a Code Editor application. Add the lifecycle configuration ARN from the preceding step to the `CodeEditorAppSettings` of your user profile.

```
# Update a UserProfile
aws sagemaker update-user-profile --domain-id domain-id \
--user-profile-name user-profile-name \
--region region \
--user-settings '{
"CodeEditorAppSettings": {
  "BuiltInLifecycleConfigArn":"lifecycle-configuration-arn"
  }
}'
```

## Create and attach lifecycle configurations (console)
<a name="studio-lifecycle-configurations-create-console"></a>

To create and attach lifecycle configurations in the AWS Management Console, navigate to the [Amazon SageMaker AI console](https://console.aws.amazon.com/sagemaker) and choose **Lifecycle configurations** in the left navigation pane. The console guides you through the process of creating the lifecycle configuration.

# Debug lifecycle configurations
<a name="studio-lifecycle-configurations-debug"></a>

The following topics show how to get information about and debug your lifecycle configurations.

**Topics**
+ [Verify lifecycle configuration process from CloudWatch Logs](#studio-lifecycle-configurations-debug-logs)
+ [Lifecycle configuration timeout](studio-lifecycle-configurations-debug-timeout.md)

## Verify lifecycle configuration process from CloudWatch Logs
<a name="studio-lifecycle-configurations-debug-logs"></a>

Lifecycle configurations only log `STDOUT` and `STDERR`.

`STDOUT` is the default output for bash scripts. You can write to `STDERR` by appending `>&2` to the end of a bash command. For example, `echo 'hello'>&2`. 
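
A minimal sketch of the two streams, using plain bash redirection with no SageMaker dependencies:

```shell
# Lines written normally go to STDOUT; appending >&2 sends a line to STDERR.
# Both streams are captured in the lifecycle configuration logs.
to_stdout=$(echo 'normal output')
to_stderr=$( { echo 'error output' >&2; } 2>&1 )
echo "$to_stdout"
echo "$to_stderr"
```

Writing diagnostic messages to `STDERR` this way makes them easy to spot when you scan the CloudWatch log stream later.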

Logs for your lifecycle configurations are published to your AWS account using Amazon CloudWatch. You can find these logs in the `/aws/sagemaker/studio` log group in the CloudWatch console.

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. Choose **Logs** from the left navigation pane. From the dropdown menu, select **Log groups**.

1. On the **Log groups** page, search for `aws/sagemaker/studio`. 

1. Select the log group.

1. On the **Log group details** page, choose the **Log streams** tab.

1. To find the logs for a specific space and app, search the log streams using the following format:

   ```
   domain-id/space-name/app-type/default/LifecycleConfigOnStart
   ```

   For example, to find the lifecycle configuration logs for domain ID `d-m85lcu8vbqmz`, space name `i-sonic-js`, and application type `JupyterLab`, use the following search string:

   ```
   d-m85lcu8vbqmz/i-sonic-js/JupyterLab/default/LifecycleConfigOnStart
   ```

1. To view the script execution logs, select the log stream appended with `LifecycleConfigOnStart`.
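
The search string from the steps above can be composed from its parts; a minimal sketch using the example values:

```shell
# Example values from the procedure above.
DOMAIN_ID="d-m85lcu8vbqmz"
SPACE_NAME="i-sonic-js"
APP_TYPE="JupyterLab"

# Assemble the log stream search string in the documented format.
LOG_STREAM_SEARCH="$DOMAIN_ID/$SPACE_NAME/$APP_TYPE/default/LifecycleConfigOnStart"
echo "$LOG_STREAM_SEARCH"
```

Scripting the string this way is handy when you need to check logs for many spaces, for example in a loop over space names.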

# Lifecycle configuration timeout
<a name="studio-lifecycle-configurations-debug-timeout"></a>

Lifecycle configuration scripts have a 5-minute timeout limitation. If a script takes longer than 5 minutes to run, you get an error.

To resolve this error, make sure that your lifecycle configuration script completes in less than 5 minutes. 

To help decrease the runtime of scripts, try the following:
+ Reduce unnecessary steps. For example, limit which conda environments to install large packages in.
+ Run tasks in parallel processes.
+ Use the `nohup` command in your script to make sure that hangup signals are ignored so that the script runs without stopping.
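
The parallel-processes tip can be sketched with background jobs and `wait`; the task bodies below are placeholders for real setup work such as package installs:

```shell
#!/bin/bash
set -eu

# Hypothetical independent setup tasks; each sleep stands in for real work.
task_a() { sleep 1; echo "task A done"; }
task_b() { sleep 1; echo "task B done"; }

# Launch both in the background so they overlap, then wait for both to finish.
task_a > a.log &
task_b > b.log &
wait
cat a.log b.log
```

Because the two tasks overlap, the total wall time is roughly that of the slowest task rather than the sum, which helps keep the script under the 5-minute timeout.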

# Amazon SageMaker Studio spaces
<a name="studio-updated-spaces"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

Spaces are used to manage the storage and resource needs of some Amazon SageMaker Studio applications. Each space is composed of multiple resources and can be either private or shared. Each space has a 1:1 relationship with an instance of an application. Every supported application that is created gets its own space. The following applications in Studio run on spaces: 
+  [Code Editor in Amazon SageMaker Studio](code-editor.md)
+  [SageMaker JupyterLab](studio-updated-jl.md) 
+  [Amazon SageMaker Studio Classic](studio.md) 

A space is composed of the following resources: 
+ A storage volume. 
  + For Studio Classic, the space is connected to the shared Amazon Elastic File System (Amazon EFS) volume for the domain. 
  + For other applications, a distinct Amazon Elastic Block Store (Amazon EBS) volume is attached to the space. All applications are given their own Amazon EBS volume. Applications do not have access to the Amazon EBS volume of other applications. For more information about Amazon EBS volumes, see [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com//AWSEC2/latest/UserGuide/AmazonEBS.html). 
+ The application type of the space. 
+ The image that the application is based on.

Spaces can be either private or shared:
+  **Private**: Private spaces are scoped to a single user in a domain. Private spaces cannot be shared with other users. All applications that support spaces also support private spaces. 
+  **Shared**: Shared spaces are accessible by all users in the domain. For more information about shared spaces, see [Collaboration with shared spaces](domain-space.md). 

Spaces can be created in domains that use either AWS IAM Identity Center or AWS Identity and Access Management (IAM) authentication. The following sections give general information about how to access spaces. For specific information about creating and accessing a space, see the documentation for the respective application type of the space that you're creating. 

For information about viewing, stopping, or deleting your applications, instances, or spaces, see [Stop and delete your Studio running applications and spaces](studio-updated-running-stop.md).

**Topics**
+ [Launch spaces](studio-updated-spaces-access.md)
+ [Collaboration with shared spaces](domain-space.md)

# Launch spaces
<a name="studio-updated-spaces-access"></a>

The following sections give information about accessing spaces in a domain. Spaces can be accessed in one of the following ways:
+ from the Amazon SageMaker AI console
+ from Studio
+ using the AWS CLI

## Accessing spaces from the Amazon SageMaker AI console
<a name="studio-updated-spaces-access-console"></a>

**To access spaces from the Amazon SageMaker AI console**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Under **Admin configurations**, choose **Domains**.

1. From the list of domains, select the domain that contains the spaces.

1. On the **Domain details** page, select the **Space management** tab. For more information about managing spaces, see [Collaboration with shared spaces](domain-space.md).

1. From the list of spaces for that domain, select the space to launch.

1. Choose **Launch Studio** for the space that you want to launch.

## Accessing spaces from Studio
<a name="studio-updated-spaces-access-updated"></a>

Follow these steps to access spaces from Studio for a specific application type. 

**To access spaces from Studio**

1. Open Studio by following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md). 

1. Select the application type with spaces that you want to access.

## Accessing spaces using the AWS CLI
<a name="studio-updated-spaces-access-cli"></a>

The following sections show how to access a space from the AWS Command Line Interface (AWS CLI). The procedures are for domains that use AWS Identity and Access Management (IAM) or AWS IAM Identity Center authentication. 

### IAM authentication
<a name="studio-updated-spaces-access-cli-iam"></a>

The following procedure outlines generally how to access a space using IAM authentication from the AWS CLI. 

1. Create a presigned domain URL specifying the name of the space that you want to access.

   ```
   aws \
       --region region \
       sagemaker \
       create-presigned-domain-url \
       --domain-id domain-id \
       --user-profile-name user-profile-name \
       --space-name space-name
   ```

1. Navigate to the URL. 

### IAM Identity Center authentication
<a name="studio-updated-spaces-access-identity-center"></a>

The following procedure outlines how to access a space using IAM Identity Center authentication from the AWS CLI. 

1. Use the following command to return the URL associated with the space.

   ```
   aws \
       --region region \
       sagemaker \
       describe-space \
       --domain-id domain-id \
       --space-name space-name
   ```

1. Append the respective redirect parameter for the application type to the URL to be federated through IAM Identity Center. For more information about the redirect parameters, see [describe-space](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/describe-space.html). 

1. Navigate to the URL to be federated through IAM Identity Center. 
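As a minimal sketch of the second step, the following appends a redirect query parameter to the URL returned by `describe-space`. The parameter name and value shown here are assumptions for illustration only; check the describe-space reference for the exact parameter for your application type.

```
# Hypothetical values: the space URL comes from the describe-space response,
# and the redirect parameter depends on the application type of the space.
SPACE_URL="https://example.studio.region.sagemaker.aws/jupyterlab/default"
REDIRECT_PARAM="redirect=JupyterLab"

# Append the redirect parameter so the URL is federated through IAM Identity Center.
FEDERATED_URL="${SPACE_URL}?${REDIRECT_PARAM}"
echo "${FEDERATED_URL}"
```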

# Collaboration with shared spaces
<a name="domain-space"></a>

An Amazon SageMaker Studio Classic shared space consists of a shared JupyterServer application and a shared directory. A JupyterLab shared space consists of a shared JupyterLab application and a shared directory within Amazon SageMaker Studio. All user profiles in a domain have access to all shared spaces in the domain. Amazon SageMaker AI automatically scopes resources in a shared space within the context of the Amazon SageMaker Studio Classic application that you launch in that shared space. Resources in a shared space include notebooks, files, experiments, and models. Use shared spaces to collaborate with other users in real time using features like automatic tagging, real-time co-editing of notebooks, and customization. 

Shared spaces are available in:
+ Amazon SageMaker Studio Classic
+ JupyterLab

A Studio Classic shared space only supports Studio Classic and KernelGateway applications, and only supports the use of a JupyterLab 3 image Amazon Resource Name (ARN). For more information, see [JupyterLab Versioning in Amazon SageMaker Studio Classic](studio-jl.md).

 Amazon SageMaker AI automatically tags all SageMaker AI resources that you create within the scope of a shared space. You can use these tags to monitor costs and plan budgets using tools, such as AWS Budgets. 

A shared space uses the same VPC settings as the domain that it's created in. 

**Note**  
 Shared spaces do not support the use of Amazon SageMaker Data Wrangler or Amazon EMR cross-account clusters. 

 **Automatic tagging** 

 All resources created in a shared space are automatically tagged with a domain ARN tag and shared space ARN tag. The domain ARN tag is based on the domain ID, while the shared space ARN tag is based on the shared space name. 

 You can use these tags to monitor AWS CloudTrail usage. For more information, see [Log Amazon SageMaker API Calls with AWS CloudTrail](https://docs.aws.amazon.com//sagemaker/latest/dg/logging-using-cloudtrail.html). 

 You can also use these tags to monitor costs with AWS Billing and Cost Management. For more information, see [Using AWS cost allocation tags](https://docs.aws.amazon.com//awsaccountbilling/latest/aboutv2/cost-alloc-tags.html). 
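As an illustration of the shape the automatic tags take, the following filters a saved tag list the way you might scan `list-tags` output. The tag key names below are invented for this sketch, not documented values; inspect the actual keys by calling `list-tags` on a resource created in your shared space.

```
# Illustrative only: sample tag list for a resource created in a shared space.
# The tag keys below are assumptions for this sketch; values are placeholders.
cat <<'EOF' > /tmp/space-tags.json
{
  "Tags": [
    {"Key": "sagemaker:domain-arn", "Value": "arn:aws:sagemaker:region:111122223333:domain/d-xxxxxxxxxxxx"},
    {"Key": "sagemaker:space-arn", "Value": "arn:aws:sagemaker:region:111122223333:space/d-xxxxxxxxxxxx/my-space"}
  ]
}
EOF

# Print just the tag keys.
python3 -c '
import json
for tag in json.load(open("/tmp/space-tags.json"))["Tags"]:
    print(tag["Key"])
'
```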

 **Real-time co-editing of notebooks** 

 A key benefit of a shared space is that it facilitates collaboration between members of the shared space in real time. Users collaborating in a workspace get access to a shared Studio Classic application where they can access, read, and edit their notebooks in real time. Real-time collaboration is only supported for JupyterServer applications within a shared space. 

 Users with access to a shared space can simultaneously open, view, edit, and execute Jupyter notebooks in the shared Studio Classic or JupyterLab application in that space. 

The notebook indicates each co-editing user with a different cursor that shows the user profile name. While multiple users can view the same notebook, co-editing is best suited for small groups of two to five users.

To track changes being made by multiple users, we strongly recommend using Studio Classic's built-in Git-based version control.

 **JupyterServer 2** 

To use shared spaces in Studio Classic, Jupyter Server version 2 is required. Certain JupyterLab extensions and packages can forcefully downgrade Jupyter Server to version 1. This prevents the use of shared spaces. Run the following from the command prompt to restore Jupyter Server version 2 and continue using shared spaces.

```
conda activate studio
pip install jupyter-server==2.0.0rc3
```

 **Customize a shared space** 

To attach a lifecycle configuration or custom image to a shared space, you must use the AWS CLI. For more information about creating and attaching lifecycle configurations, see [Create and Associate a Lifecycle Configuration with Amazon SageMaker Studio Classic](studio-lcc-create.md). For more information about creating and attaching custom images, see [Custom Images in Amazon SageMaker Studio Classic](studio-byoi.md).

# Create a shared space
<a name="domain-space-create"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

 The following topic demonstrates how to create a shared space in an existing Amazon SageMaker AI domain. If you created your domain without support for shared spaces, you must add support for shared spaces to your existing domain before you can create a shared space. 

**Topics**
+ [Add shared space support to an existing domain](#domain-space-add)
+ [Create a shared space](#domain-space-create-app)

## Add shared space support to an existing domain
<a name="domain-space-add"></a>

 You can use the SageMaker AI console or the AWS CLI to add support for shared spaces to an existing domain. If the domain is using `VPC only` network access, then you can only add shared space support using the AWS CLI.

### Console
<a name="domain-space-add-console"></a>

 Complete the following procedure to add support for Studio Classic shared spaces to an existing domain from the SageMaker AI console. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1.  From the list of domains, select the domain that you want to open the **Domain settings** page for. 

1.  On the **Domain details** page, choose the **Domain settings** tab. 

1.  Choose **Edit**. 

1.  For **Space default execution role**, set an IAM role that is used by default for all shared spaces created in the domain. 

1.  Choose **Next**. 

1.  Choose **Next**. 

1.  Choose **Next**. 

1.  Choose **Submit**. 

### AWS CLI
<a name="domain-space-add-cli"></a>

------
#### [ Studio Classic ]

Run the following command from the terminal of your local machine to add default shared space settings to a domain from the AWS CLI. If you are adding default shared space settings to a domain within an Amazon VPC, you must also include a list of security groups. Studio Classic shared spaces only support the use of JupyterLab 3 image ARNs. For more information, see [JupyterLab Versioning in Amazon SageMaker Studio Classic](studio-jl.md).

```
# Public Internet domain
aws --region region \
sagemaker update-domain \
--domain-id domain-id \
--default-space-settings "ExecutionRole=execution-role-arn,JupyterServerAppSettings={DefaultResourceSpec={InstanceType=system,SageMakerImageArn=sagemaker-image-arn}}"

# VPCOnly domain
aws --region region \
sagemaker update-domain \
--domain-id domain-id \
--default-space-settings "ExecutionRole=execution-role-arn,JupyterServerAppSettings={DefaultResourceSpec={InstanceType=system,SageMakerImageArn=sagemaker-image-arn}},SecurityGroups=[security-groups]"
```

Use the following command to verify that the default shared space settings have been updated. 

```
aws --region region \
sagemaker describe-domain \
--domain-id domain-id
```

------
#### [ JupyterLab ]

Run the following command from the terminal of your local machine to add default shared space settings to a domain from the AWS CLI. If you are adding default shared space settings to a domain within an Amazon VPC, you must also include a list of security groups. JupyterLab shared spaces only support the use of JupyterLab 4 image ARNs. For more information, see [SageMaker JupyterLab](studio-updated-jl.md).

```
# Public Internet domain
aws --region region \
sagemaker update-domain \
--domain-id domain-id \
--default-space-settings "ExecutionRole=execution-role-arn,JupyterLabAppSettings={DefaultResourceSpec={InstanceType=example-instance-type,SageMakerImageArn=sagemaker-image-arn}}"

# VPCOnly domain
aws --region region \
sagemaker update-domain \
--domain-id domain-id \
--default-space-settings "ExecutionRole=execution-role-arn,SecurityGroups=[security-groups]"
```

Use the following command to verify that the default shared space settings have been updated. 

```
aws --region region \
sagemaker describe-domain \
--domain-id domain-id
```

------

## Create a shared space
<a name="domain-space-create-app"></a>

The following sections demonstrate how to create a shared space from the Amazon SageMaker AI console, Amazon SageMaker Studio, or the AWS CLI.

### Create from Studio
<a name="domain-space-create-updated"></a>

Use the following procedures to create a shared space in a domain from Studio.

------
#### [ Studio Classic ]

1. Navigate to Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md).

1. From the Studio UI, find the applications pane on the left side.

1. From the applications pane, select **Studio Classic**.

1. Choose **Create Studio Classic space**.

1. In the pop-up window, enter a name for the space.

1. Choose **Create space**.

------
#### [ JupyterLab ]

1. Navigate to Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md).

1. From the Studio UI, find the applications pane on the left side.

1. From the applications pane, select **JupyterLab**.

1. Choose **Create JupyterLab space**.

1. In the pop-up window, enter a name for the space.

1. Choose **Create space**.

------

### Create from the console
<a name="domain-space-create-console"></a>

 Complete the following procedure to create a shared space in a domain from the SageMaker AI console. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1.  From the list of domains, select the domain that you want to create a shared space for. 

1.  On the **Domain details** page, choose the **Space management** tab. 

1.  Choose **Create**. 

1.  Enter a name for your shared space. Shared space names within a domain must be unique. The execution role for the shared space is set to the domain IAM execution role. 

### Create from AWS CLI
<a name="domain-space-create-cli"></a>

This section shows how to create a shared space from the AWS CLI. 

You cannot set the execution role of a shared space when creating or updating it. The `DefaultDomainExecRole` can only be set when creating or updating the domain. Shared spaces only support the use of JupyterLab 3 image ARNs. For more information, see [JupyterLab Versioning in Amazon SageMaker Studio Classic](studio-jl.md).

To create a shared space from the AWS CLI, run one of the following commands from the terminal of your local machine.

------
#### [ Studio Classic ]

```
aws --region region \
sagemaker create-space \
--domain-id domain-id \
--space-name space-name \
--space-settings '{
  "JupyterServerAppSettings": {
    "DefaultResourceSpec": {
      "SageMakerImageArn": "sagemaker-image-arn",
      "InstanceType": "system"
    }
  }
}'
```

------
#### [ JupyterLab ]

```
aws --region region \
sagemaker create-space \
--domain-id domain-id \
--space-name space-name \
--ownership-settings "{\"OwnerUserProfileName\": \"user-profile-name\"}" \
--space-sharing-settings "{\"SharingType\": \"Shared\"}" \
--space-settings "{\"AppType\": \"JupyterLab\"}"
```

------
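Quoting mistakes in the inline JSON are a common source of `create-space` failures. As an optional local sanity check (placeholder values assumed), the following validates each payload with Python's `json.tool` before it is passed to the AWS CLI:

```
# Validate the JSON payloads locally before calling create-space.
# Values are placeholders; replace them with your own.
SPACE_SETTINGS='{"AppType": "JupyterLab"}'
OWNERSHIP_SETTINGS='{"OwnerUserProfileName": "user-profile-name"}'
SHARING_SETTINGS='{"SharingType": "Shared"}'

for doc in "$SPACE_SETTINGS" "$OWNERSHIP_SETTINGS" "$SHARING_SETTINGS"; do
  # json.tool exits nonzero on malformed JSON, so a bad payload is caught here.
  printf '%s' "$doc" | python3 -m json.tool > /dev/null && echo "valid: $doc"
done
```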

# Get information about shared spaces
<a name="domain-space-list"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

 This guide shows how to access a list of shared spaces in an Amazon SageMaker AI domain with the Amazon SageMaker AI console, Amazon SageMaker Studio, or the AWS CLI. It also shows how to view details of a shared space from the AWS CLI. 

**Topics**
+ [List shared spaces](#domain-space-list-spaces)
+ [View shared space details](#domain-space-describe)

## List shared spaces
<a name="domain-space-list-spaces"></a>

 The following topic describes how to view a list of shared spaces within a domain from Studio, the SageMaker AI console, or the AWS CLI. 

### List shared spaces from Studio
<a name="domain-space-list-updated"></a>

 Complete the following procedure to view a list of the shared spaces in a domain from Studio.

1. Navigate to Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md).

1. From the Studio UI, find the applications pane on the left side.

1. From the applications pane, select **Studio Classic** or **JupyterLab**. You can view the spaces that are being used to run the application type.

### List shared spaces from the console
<a name="domain-space-list-console"></a>

 Complete the following procedure to view a list of the shared spaces in a domain from the SageMaker AI console. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1.  From the list of domains, select the domain that you want to view the list of shared spaces for. 

1.  On the **Domain details** page, choose the **Space management** tab. 

### List shared spaces from the AWS CLI
<a name="domain-space-list-cli"></a>

 To list the shared spaces in a domain from the AWS CLI, run the following command from the terminal of your local machine.

```
aws --region region \
sagemaker list-spaces \
--domain-id domain-id
```
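The `list-spaces` response contains a `Spaces` array. As a sketch, the following parses saved output to print just the space names; the sample mirrors the response shape, and the values are placeholders.

```
# Illustrative sample of list-spaces output, saved locally.
cat <<'EOF' > /tmp/list-spaces.json
{"Spaces": [{"SpaceName": "team-space"}, {"SpaceName": "experiment-space"}]}
EOF

# Print one space name per line.
python3 -c '
import json
for space in json.load(open("/tmp/list-spaces.json"))["Spaces"]:
    print(space["SpaceName"])
'
```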

## View shared space details
<a name="domain-space-describe"></a>

 The following section describes how to view shared space details from the SageMaker AI console, Studio, or the AWS CLI. 

### View shared spaces details from Studio
<a name="domain-space-describe-updated"></a>

 Complete the following procedure to view the details of a shared space in a domain from Studio.

1. Navigate to Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md).

1. From the Studio UI, find the applications pane on the left side.

1. From the applications pane, select **Studio Classic** or **JupyterLab**. You can view the spaces that are running the application.

1. Select the name of the space that you want to view more details for.

### View shared space details from the console
<a name="domain-space-describe-console"></a>

 You can view the details of a shared space from the SageMaker AI console using the following procedure. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1.  From the list of domains, select the domain that you want to view the list of shared spaces for. 

1.  On the **Domain details** page, choose the **Space management** tab. 

1.  Select the name of the space to open a new page that lists details about the shared space. 

### View shared space details from the AWS CLI
<a name="domain-space-describe-cli"></a>

To view the details of a shared space from the AWS CLI, run the following command from the terminal of your local machine.

```
aws --region region \
sagemaker describe-space \
--domain-id domain-id \
--space-name space-name
```

# Edit a shared space
<a name="domain-space-edit"></a>

 You can only edit the details of an Amazon SageMaker Studio Classic or JupyterLab shared space using the AWS CLI. You can't edit the details of a shared space from the Amazon SageMaker AI console. You can only update a space's attributes when there are no running applications in the shared space. 

------
#### [ Studio Classic ]

To edit the details of a Studio Classic shared space from the AWS CLI, run the following command from the terminal of your local machine. Shared spaces only support the use of JupyterLab 3 image ARNs. For more information, see [JupyterLab Versioning in Amazon SageMaker Studio Classic](studio-jl.md).

```
aws --region region \
sagemaker update-space \
--domain-id domain-id \
--space-name space-name \
--query SpaceArn --output text \
--space-settings '{
  "JupyterServerAppSettings": {
    "DefaultResourceSpec": {
      "SageMakerImageArn": "sagemaker-image-arn",
      "InstanceType": "system"
    }
  }
}'
```

------
#### [ JupyterLab ]

To edit the details of a JupyterLab shared space from the AWS CLI, run the following command from the terminal of your local machine. Shared spaces only support the use of JupyterLab 4 image ARNs. For more information, see [SageMaker JupyterLab](studio-updated-jl.md).

```
aws --region region \
sagemaker update-space \
--domain-id domain-id \
--space-name space-name \
--space-settings '{
  "SpaceStorageSettings": {
    "EbsStorageSettings": {
      "EbsVolumeSizeInGb": 100
    }
  }
}'
```

------

# Delete a shared space
<a name="domain-space-delete"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

 The following topic shows how to delete an Amazon SageMaker Studio Classic shared space from the Amazon SageMaker AI console or AWS CLI. A shared space can only be deleted if it has no running applications. 

**Topics**
+ [Console](#domain-space-delete-console)
+ [AWS CLI](#domain-space-delete-cli)

## Console
<a name="domain-space-delete-console"></a>

 Complete the following procedure to delete a shared space in the Amazon SageMaker AI domain from the SageMaker AI console. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1.  From the list of domains, select the domain that you want to delete a shared space from. 

1.  On the **Domain details** page, choose the **Space management** tab. 

1.  Select the shared space that you want to delete. The shared space must not contain any non-failed apps. 

1.  Choose **Delete**. This opens a new window. 

1.  Choose **Yes, delete space**. 

1.  Enter *delete* in the field. 

1.  Choose **Delete space**. 

## AWS CLI
<a name="domain-space-delete-cli"></a>

To delete a shared space from the AWS CLI, run the following command from the terminal of your local machine.

```
aws --region region \
sagemaker delete-space \
--domain-id domain-id \
--space-name space-name
```

# Trusted identity propagation with Studio
<a name="trustedidentitypropagation"></a>

Trusted identity propagation is an AWS IAM Identity Center feature that administrators of connected AWS services can use to grant and audit access to service data. Access to this data is based on user attributes such as group associations. Setting up trusted identity propagation requires collaboration between the administrators of connected AWS services and the IAM Identity Center administrator. For more information, see [Prerequisites and considerations](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation-overall-prerequisites.html).

The Amazon SageMaker Studio and IAM Identity Center administrators can collaborate to connect the services for trusted identity propagation. Trusted identity propagation addresses enterprise authentication needs across AWS services by providing:
+ Enhanced auditing that traces actions to specific users
+ Access management for data science and machine learning workloads through integration with compatible AWS services
+ Support for compliance requirements in regulated industries

Studio supports trusted identity propagation for audit purposes and access control with connected AWS services. Trusted identity propagation in Studio does not directly handle authentication or authorization decisions within Studio itself. Instead, it propagates identity context information to compatible services that can use this information for access control.

When you use trusted identity propagation with Studio, your IAM Identity Center identity propagates to connected AWS services, creating more granular permissions and security governance.

**Topics**
+ [Trusted identity propagation architecture and compatibility](trustedidentitypropagation-compatibility.md)
+ [Setting up trusted identity propagation for Studio](trustedidentitypropagation-setup.md)
+ [Monitoring and auditing with CloudTrail](trustedidentitypropagation-auditing.md)
+ [User background sessions](trustedidentitypropagation-user-background-sessions.md)
+ [How to connect with other AWS services with trusted identity propagation enabled](trustedidentitypropagation-connect-other.md)

# Trusted identity propagation architecture and compatibility
<a name="trustedidentitypropagation-compatibility"></a>

Trusted identity propagation integrates AWS IAM Identity Center with Amazon SageMaker Studio and other connected AWS services to propagate users' identity context across services. The following page summarizes the trusted identity propagation architecture and compatibility with SageMaker AI. For a comprehensive overview of how trusted identity propagation works across AWS, see [Trusted identity propagation overview](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation-overview.html).

The key components of the trusted identity propagation architecture include:
+  **Trusted identity propagation**: A methodology for propagating a user's identity context between applications and services
+  **Identity context**: Information about a user
+  **Identity-enhanced IAM role session**: An IAM role session with added identity context that carries a user identifier to the AWS services that it calls
+  **Connected AWS services**: Other AWS services that can recognize the identity context that is propagated through trusted identity propagation

Trusted identity propagation allows connected AWS services to make access decisions based on a user's identity. Within Studio itself, IAM roles are used as carriers of the identity context rather than for making access control decisions. The identity context is propagated to connected AWS services where it can be used for both access control and audit purposes. See [trusted identity propagation considerations](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation-overall-prerequisites.html#trustedidentitypropagation-considerations) for more information.

When you enable trusted identity propagation with Studio and authenticate through IAM Identity Center, SageMaker AI:
+ Captures the user's identity context from IAM Identity Center
+ Creates an identity-enhanced IAM role session that includes the user's identity context
+ Passes the identity-enhanced IAM role session to compatible AWS services when the user accesses resources
+ Enables downstream AWS services to make access decisions and log activities based on the user identity

## Compatible SageMaker AI features
<a name="trustedidentitypropagation-compatibility-compatible-features"></a>

Trusted identity propagation works with the following Studio features:
+ [Amazon SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated-launch.html) private spaces (JupyterLab and Code Editor, based on Code-OSS, Visual Studio Code - Open Source)

**Note**  
When Studio launches with trusted identity propagation enabled, it uses your identity context in addition to your execution role permissions. However, the following processes during instance setup will only use the execution role permissions, without the identity context: Lifecycle Configuration, Bring-Your-Own-Image, CloudWatch agent for user log forwarding.
[Remote access](https://docs.aws.amazon.com/sagemaker/latest/dg/remote-access.html) is not currently supported with trusted identity propagation.
When you use assume role operations within Studio notebooks, the assumed roles don't propagate trusted identity propagation context. Only the original execution role maintains the identity context.
+  [SageMaker Training](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html) 
+  [SageMaker Processing](https://docs.aws.amazon.com/sagemaker/latest/dg/processing-job.html) 
+  [SageMaker AI realtime hosting](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-options.html) 
+  [SageMaker Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-overview.html) 
+  [SageMaker real-time inference](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) 
+  [SageMaker Asynchronous Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html) 
+  [Managed MLflow](https://docs.aws.amazon.com/sagemaker/latest/dg/mlflow.html) 

## Compatible AWS services
<a name="trustedidentitypropagation-compatibility-compatible-services"></a>

Trusted identity propagation for Amazon SageMaker Studio integrates with compatible AWS services that have trusted identity propagation enabled. See [use cases](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation-integrations.html) for a comprehensive list, with examples of how to enable trusted identity propagation. The trusted identity propagation compatible services include the following.
+  [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/workgroups-identity-center.html) 
+  [Amazon EMR on EC2](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-idc-start.html) 
+  [EMR Serverless](https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/security-iam-service-trusted-prop.html) 
+  [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/identity-center-integration.html) 
+  [Amazon Redshift Data API](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api-trusted-identity-propagation.html) 
+ Amazon S3 (via [Amazon S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-get-started.html))
+ [AWS Glue Connections](https://docs.aws.amazon.com/glue/latest/dg/security-trusted-identity-propagation.html)

When trusted identity propagation is enabled with SageMaker AI, it connects to every other AWS service that also has trusted identity propagation enabled. Once connected, those services recognize and use the user's identity context for access control and auditing.

## Supported AWS Regions
<a name="trustedidentitypropagation-compatibility-supported-regions"></a>

Studio supports trusted identity propagation in AWS Regions where both [IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/regions.html) and Studio with IAM Identity Center authentication are supported. Studio supports trusted identity propagation in the following AWS Regions:
+ af-south-1
+ ap-east-1
+ ap-northeast-1
+ ap-northeast-2
+ ap-northeast-3
+ ap-south-1
+ ap-southeast-1
+ ap-southeast-2
+ ap-southeast-3
+ ca-central-1
+ eu-central-1
+ eu-central-2
+ eu-north-1
+ eu-south-1
+ eu-west-1
+ eu-west-2
+ eu-west-3
+ il-central-1
+ me-south-1
+ sa-east-1
+ us-east-1
+ us-east-2
+ us-west-1
+ us-west-2

# Setting up trusted identity propagation for Studio
<a name="trustedidentitypropagation-setup"></a>

Setting up trusted identity propagation for Amazon SageMaker Studio requires your Amazon SageMaker AI domain to have IAM Identity Center authentication method configured. This section guides you through the prerequisites and steps needed to enable and configure trusted identity propagation for your Studio users.

**Topics**
+ [Prerequisites](#trustedidentitypropagation-setup-prerequisites)
+ [Enable trusted identity propagation for your Amazon SageMaker AI domain](#trustedidentitypropagation-setup-enable)
+ [Configure your SageMaker AI execution role](#trustedidentitypropagation-setup-permissions)

## Prerequisites
<a name="trustedidentitypropagation-setup-prerequisites"></a>

Before setting up trusted identity propagation for SageMaker AI, set up your IAM Identity Center using the following instructions.

**Note**  
Ensure that your IAM Identity Center and domain are in the same AWS Region.
+  [IAM Identity Center trusted identity propagation prerequisites](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation-overall-prerequisites.html#trustedidentitypropagation-prerequisites) 
+  [Set up IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) 
+  [Add users to your IAM Identity Center directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/addusers.html) 

## Enable trusted identity propagation for your Amazon SageMaker AI domain
<a name="trustedidentitypropagation-setup-enable"></a>

**Important**  
You can only enable trusted identity propagation for domains with AWS IAM Identity Center authentication method configured.
Your IAM Identity Center and Amazon SageMaker AI domain must be in the same AWS Region.

Use one of the following options to learn how to enable trusted identity propagation for a new or existing domain.

------
#### [ New domain - console ]

**Enable trusted identity propagation for a new domain using the SageMaker AI console**

1. Open the [Amazon SageMaker AI console](https://console.aws.amazon.com/sagemaker).

1. Navigate to **Domains**.

1. [Create a custom domain](https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-custom.html). The domain must have the **AWS IAM Identity Center** authentication method configured.

1. In the **Trusted identity propagation** section, choose **Enable the trusted identity propagation for all users on this domain**.

1. Complete the domain creation process.

------
#### [ Existing domain - console ]

**Enable trusted identity propagation for an existing domain using the SageMaker AI console**
**Note**  
For trusted identity propagation to work properly after it is enabled for an existing domain, users must restart their existing IAM Identity Center sessions. To do so, either:  
+ Users log out and log back in to their existing IAM Identity Center sessions.
+ Administrators [end active sessions for their workforce users](https://docs.aws.amazon.com/singlesignon/latest/userguide/end-active-sessions.html).

1. Open the [Amazon SageMaker AI console](https://console.aws.amazon.com/sagemaker).

1. Navigate to **Domains**.

1. Select your existing domain. The domain must have the **AWS IAM Identity Center** authentication method configured.

1. In the **Domain settings** tab, choose **Edit** in the **Authentication and permissions** section.

1. Choose **Enable the trusted identity propagation for all users on this domain**.

1. Complete the domain configuration.

------
#### [ Existing domain - AWS CLI ]

**Enable trusted identity propagation for an existing domain using the AWS CLI**

**Note**  
For trusted identity propagation to work properly after it is enabled for an existing domain, users must restart their existing IAM Identity Center sessions. To do so, either:  
+ Users log out and log back in to their existing IAM Identity Center sessions.
+ Administrators [end active sessions for their workforce users](https://docs.aws.amazon.com/singlesignon/latest/userguide/end-active-sessions.html).

```
aws sagemaker update-domain \
    --region $REGION \
    --domain-id $DOMAIN_ID \
    --domain-settings "TrustedIdentityPropagationSettings={Status=ENABLED}"
```
+ `DOMAIN_ID` is the Amazon SageMaker AI domain ID. See [View domains](https://docs.aws.amazon.com/sagemaker/latest/dg/domain-view.html) for more information.
+ `REGION` is the AWS Region of your Amazon SageMaker AI domain. You can find this at the top right of any AWS console page.

------

## Configure your SageMaker AI execution role
<a name="trustedidentitypropagation-setup-permissions"></a>

To enable trusted identity propagation for your Studio users, all roles used with trusted identity propagation need the following context permissions. Update the trust policy for each role to include the `sts:AssumeRole` and `sts:SetContext` actions. Use the following policy when you [update your role trust policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-trust-policy.html).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "sagemaker.amazonaws.com"
                ]
            },
            "Action": [
                "sts:AssumeRole",
                "sts:SetContext"
            ]
        }
    ]
}
```

------

# Monitoring and auditing with CloudTrail
<a name="trustedidentitypropagation-auditing"></a>

With trusted identity propagation enabled, AWS CloudTrail logs include the identity information of the specific user who performed an action, rather than just the IAM role. This provides enhanced auditing capabilities for compliance and security.

To view identity information in CloudTrail logs:
+ Open the [CloudTrail console](https://console.aws.amazon.com/cloudtrail).
+ Choose **Event history** from the left navigation pane.
+ Choose events from SageMaker AI and related services.
+ Under the **Event record**, find the `onBehalfOf` key. This key contains the `userId` key and other user identification information that can be mapped to a specific IAM Identity Center user.

  See [CloudTrail use cases for IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/sso-cloudtrail-use-cases.html) for more information.

# User background sessions
<a name="trustedidentitypropagation-user-background-sessions"></a>

User background sessions continue even when the user is no longer active, allowing long-running jobs to finish after the user has logged off. You can enable user background sessions through SageMaker AI's trusted identity propagation. The following page explains the configuration options and behaviors for user background sessions.

**Note**  
Existing active user sessions are not impacted when trusted identity propagation is enabled. The default duration applies only to new user sessions or restarted sessions.
User background sessions apply to any long-running SageMaker AI workflows or jobs with persistent states. This includes, but is not limited to, any SageMaker AI resources that maintain execution status or require ongoing monitoring. For example, SageMaker Training, Processing, and Pipelines execution jobs.

**Topics**
+ [Configure user background session](#configure-user-background-sessions)
+ [Default user background session duration](#default-user-background-session-duration)
+ [Impact of disabling trusted identity propagation in Studio](#user-background-session-impact-disable-trustedidentitypropagation-studio)
+ [Impact of disabling user background sessions in the IAM Identity Center console](#user-background-session-impact-disable-trustedidentitypropagation-identity-center)
+ [Runtime considerations](#user-background-session-runtime-considerations)

## Configure user background session
<a name="configure-user-background-sessions"></a>

Once trusted identity propagation for Amazon SageMaker Studio is enabled, default duration limits can be configured through the [user background sessions in the IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/user-background-sessions.html).

## Default user background session duration
<a name="default-user-background-session-duration"></a>

By default, all user background sessions have a duration limit of 7 days. Administrators can [modify this duration in the IAM Identity Center console](https://docs.aws.amazon.com/singlesignon/latest/userguide/user-background-sessions.html). This setting applies at the IAM Identity Center instance level, affecting all supported IAM Identity Center applications and Studio domains within that instance.

When trusted identity propagation is enabled, administrators in the SageMaker AI console will find a banner with the following information:
+ The duration limit for user background sessions
+ A link to the IAM Identity Center console where administrators can change this configuration
  + The duration can be set to any value from 15 minutes up to 90 days

An error message appears when a user background session has expired. Use the link to the IAM Identity Center console to update the duration.

## Impact of disabling trusted identity propagation in Studio
<a name="user-background-session-impact-disable-trustedidentitypropagation-studio"></a>

If an administrator disables trusted identity propagation in the SageMaker AI console after initially enabling it:
+ Existing jobs continue to run without interruption when user background sessions are enabled.
+ When user background sessions are disabled, any long-running SageMaker AI workflows or jobs with persistent states will switch to using interactive sessions. This includes, but is not limited to, any SageMaker AI resources that maintain execution status or require ongoing monitoring. For example, Amazon SageMaker Training and Processing jobs.
+ Users can restart expired jobs from checkpoints.
+ New jobs run with IAM role credentials and do not propagate the identity context.

## Impact of disabling user background sessions in the IAM Identity Center console
<a name="user-background-session-impact-disable-trustedidentitypropagation-identity-center"></a>

When the user background session is **disabled** for the IAM Identity Center instance, the SageMaker AI job uses user interactive sessions. When using interactive sessions, a SageMaker AI job will fail within 15 minutes when:
+ The user logs out
+ The interactive session is revoked by the administrator

When the user background session is **enabled** for the IAM Identity Center instance, the SageMaker AI job uses user background sessions. When using background sessions, a SageMaker AI job will fail within 15 minutes when:
+ The user background session expires
+ The user background session is manually revoked by an administrator

The following provides example behavior with SageMaker Training jobs. When an administrator enables trusted identity propagation but disables [user background sessions](https://docs.aws.amazon.com/singlesignon/latest/userguide/user-background-sessions.html) in the IAM Identity Center console:
+ If a user stays logged in, their Training jobs created while background sessions are disabled fall back to the interactive session.
+ If the user logs off, the session expires and Training jobs depending on the interactive session will fail.
+ Users can restart their Training job from the last checkpoint. The session duration is determined by what is set for the interactive session duration in the IAM Identity Center console.
+ If a user disables background sessions **after** starting a job, the job will continue to use its existing background sessions. In other words, SageMaker AI will not create any new background sessions.

The same behavior applies if background sessions are enabled at the IAM Identity Center instance level but disabled specifically for the Studio application using [IAM Identity Center APIs](https://docs.aws.amazon.com/singlesignon/latest/APIReference/welcome.html).

## Runtime considerations
<a name="user-background-session-runtime-considerations"></a>

When an administrator sets a `MaxRuntimeInSeconds` for long-running Training or Processing jobs that is lower than the user background session duration, SageMaker AI runs the job for the shorter of `MaxRuntimeInSeconds` and the user background session duration. For more information about `MaxRuntimeInSeconds`, see [CreateTrainingJob](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrainingJob.html#sagemaker-CreateTrainingJob-request-StoppingCondition). See [user background sessions in the IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/user-background-sessions.html) for information on how to set the runtime.
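The effective limit reduces to a simple minimum. A sketch, with durations in seconds:

```python
def effective_job_runtime(max_runtime_in_seconds: int,
                          background_session_seconds: int) -> int:
    """SageMaker AI stops the job at whichever limit is reached first."""
    return min(max_runtime_in_seconds, background_session_seconds)

# A 5-day MaxRuntimeInSeconds under the default 7-day session: the job limit wins.
print(effective_job_runtime(5 * 24 * 3600, 7 * 24 * 3600))  # 432000
```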

# How to connect with other AWS services with trusted identity propagation enabled
<a name="trustedidentitypropagation-connect-other"></a>

When trusted identity propagation is enabled for your Amazon SageMaker AI domain, the domain users can connect to other trusted identity propagation enabled AWS services. When trusted identity propagation is enabled, your identity context is automatically propagated to compatible services, allowing for fine-grained access control and improved auditing across your machine learning workflows. This integration eliminates the need for complex IAM role switching and provides a unified identity experience across AWS services. The following pages provide information on how to connect Amazon SageMaker Studio to other AWS services when trusted identity propagation is enabled.

**Topics**
+ [Connect Studio JupyterLab notebooks to Amazon S3 Access Grants with trusted identity propagation enabled](trustedidentitypropagation-s3-access-grants.md)
+ [Connect Studio JupyterLab notebooks to Amazon EMR with trusted identity propagation enabled](trustedidentitypropagation-emr-ec2.md)
+ [Connect your Studio JupyterLab notebooks to EMR Serverless with trusted identity propagation enabled](trustedidentitypropagation-emr-serverless.md)
+ [Connect Studio JupyterLab notebooks to Redshift Data API with trusted identity propagation enabled](trustedidentitypropagation-redshift-data-apis.md)
+ [Connect Studio JupyterLab notebooks to Lake Formation and Athena with trusted identity propagation enabled](trustedidentitypropagation-lake-formation-athena.md)

# Connect Studio JupyterLab notebooks to Amazon S3 Access Grants with trusted identity propagation enabled
<a name="trustedidentitypropagation-s3-access-grants"></a>

You can use [Amazon S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants.html) to flexibly grant identity-based, fine-grained access control to Amazon S3 locations. These grants give your corporate users and groups direct access to Amazon S3 buckets. The following pages provide information and instructions on how to use Amazon S3 Access Grants with trusted identity propagation for SageMaker AI.

## Prerequisites
<a name="s3-access-grants-prerequisites"></a>

To connect Studio to Amazon S3 Access Grants with trusted identity propagation enabled, ensure you have completed the following prerequisites:
+  [Setting up trusted identity propagation for Studio](trustedidentitypropagation-setup.md) 
+ Follow the [getting started with Amazon S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-get-started.html) to set up Amazon S3 Access Grants for your bucket. See [scaling data access with Amazon S3 Access Grants](https://aws.amazon.com/blogs/storage/scaling-data-access-with-amazon-s3-access-grants/) for more information.
**Note**  
Standard Amazon S3 APIs do not automatically work with Amazon S3 Access Grants. You must explicitly use Amazon S3 Access Grants APIs. See [Managing access with Amazon S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants.html) for more information.

**Topics**
+ [Prerequisites](#s3-access-grants-prerequisites)
+ [Connect Amazon S3 Access Grants with Studio JupyterLab notebooks](s3-access-grants-setup.md)
+ [Connect Studio JupyterLab notebooks to Amazon S3 Access Grants with Training and Processing jobs](trustedidentitypropagation-s3-access-grants-jobs.md)

# Connect Amazon S3 Access Grants with Studio JupyterLab notebooks
<a name="s3-access-grants-setup"></a>

The following information describes how to use Amazon S3 Access Grants in Studio JupyterLab notebooks.

After Amazon S3 Access Grants is set up, [add the following permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) to your domain or user [execution role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-get-execution-role).
+ `us-east-1` is your AWS Region
+ `111122223333` is your AWS account ID
+ `S3-ACCESS-GRANT-ROLE` is your Amazon S3 Access Grant role

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowDataAccessAPI",
            "Effect": "Allow",
            "Action": [
                "s3:GetDataAccess"
            ],
            "Resource": [
                "arn:aws:s3:us-east-1:111122223333:access-grants/default"
            ]
        },
        {
            "Sid": "RequiredForTIP",
            "Effect": "Allow",
            "Action": "sts:SetContext",
            "Resource": "arn:aws:iam::111122223333:role/S3-ACCESS-GRANT-ROLE"
        }
    ]
}
```

------

Ensure that your Amazon S3 Access Grants role's trust policy allows the `sts:SetContext` and `sts:AssumeRole` actions. The following is an example policy for when you [update your role trust policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-trust-policy.html).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "access-grants.s3.amazonaws.com"
                ]
            },
            "Action": [
                "sts:AssumeRole",
                "sts:SetContext"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "111122223333",
                    "aws:SourceArn": "arn:aws:s3:us-east-1:111122223333:access-grants/default"
                }
            }
        }
    ]
}
```

------

## Use Amazon S3 Access Grants to call Amazon S3
<a name="s3-access-grants-python-example"></a>

The following is an example Python script showing how Amazon S3 Access Grants can be used to call Amazon S3. This assumes you have already successfully set up trusted identity propagation with SageMaker AI.

```
import boto3

def get_access_grant_credentials(account_id: str, target: str,
                                 permission: str = 'READ'):
    """Request temporary credentials for the target Amazon S3 path from Amazon S3 Access Grants."""
    s3control = boto3.client('s3control')
    response = s3control.get_data_access(
        AccountId=account_id,
        Target=target,
        Permission=permission
    )
    return response['Credentials']

def create_s3_client_from_credentials(credentials):
    """Create an Amazon S3 client that authenticates with the granted temporary credentials."""
    return boto3.client(
        's3',
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken']
    )

# Request credentials for the granted path and create a client that uses them
credentials = get_access_grant_credentials('111122223333',
                                           's3://tip-enabled-bucket/tip-enabled-path/')
s3 = create_s3_client_from_credentials(credentials)

# List objects under the granted prefix using the user's propagated identity
s3.list_objects(Bucket='tip-enabled-bucket', Prefix='tip-enabled-path/')
```

If you use a path to an Amazon S3 bucket where Amazon S3 Access Grants is not enabled, the call fails.

For other programming languages, see [Managing access with Amazon S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants.html) for more information.

# Connect Studio JupyterLab notebooks to Amazon S3 Access Grants with Training and Processing jobs
<a name="trustedidentitypropagation-s3-access-grants-jobs"></a>

Use the following information to access data with Amazon S3 Access Grants in Amazon SageMaker Training and Processing jobs.

When a user with trusted identity propagation enabled launches a SageMaker Training or Processing job that needs to access Amazon S3 data:
+ SageMaker AI calls Amazon S3 Access Grants to get temporary credentials based on the user's identity
+ If successful, these temporary credentials access the Amazon S3 data
+ If unsuccessful, SageMaker AI falls back to using the IAM role credentials

**Note**  
To enforce that all permissions are granted through Amazon S3 Access Grants, remove the related Amazon S3 access permissions from your execution role and attach them to the corresponding [Amazon S3 Access Grant](https://docs.aws.amazon.com/singlesignon/latest/userguide/tip-tutorial-s3.html#tip-tutorial-s3-create-grant).

**Topics**
+ [Considerations](#s3-access-grants-jobs-considerations)
+ [Set up Amazon S3 Access Grants with Training and Processing jobs](#s3-access-grants-jobs-setup)

## Considerations
<a name="s3-access-grants-jobs-considerations"></a>

Amazon S3 Access Grants cannot be used with [Pipe mode](https://docs.aws.amazon.com/sagemaker/latest/dg/augmented-manifest-stream.html) for both SageMaker Training and Processing for Amazon S3 input.

When trusted identity propagation is enabled, you cannot launch a SageMaker Training job with the following features:
+ Remote Debug
+ Debugger
+ Profiler

When trusted identity propagation is enabled, you cannot launch a Processing job with the following feature:
+ DatasetDefinition

## Set up Amazon S3 Access Grants with Training and Processing jobs
<a name="s3-access-grants-jobs-setup"></a>

After Amazon S3 Access Grants is set up, [add the following permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) to your domain or user [execution role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-get-execution-role).
+ `us-east-1` is your AWS Region
+ `111122223333` is your AWS account ID
+ `S3-ACCESS-GRANT-ROLE` is your Amazon S3 Access Grant role

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowDataAccessAPI",
            "Effect": "Allow",
            "Action": [
                "s3:GetDataAccess",
                "s3:GetAccessGrantsInstanceForPrefix"
            ],
            "Resource": [
                "arn:aws:s3:us-east-1:111122223333:access-grants/default"
            ]
        },
        {
            "Sid": "RequiredForIdentificationPropagation",
            "Effect": "Allow",
            "Action": "sts:SetContext",
            "Resource": "arn:aws:iam::111122223333:role/S3-ACCESS-GRANT-ROLE"
        }
    ]
}
```

------

# Connect Studio JupyterLab notebooks to Amazon EMR with trusted identity propagation enabled
<a name="trustedidentitypropagation-emr-ec2"></a>

Connecting Amazon SageMaker Studio JupyterLab notebooks to Amazon EMR clusters enables you to leverage the distributed computing power of Amazon EMR for large-scale data processing and analytics workloads. With trusted identity propagation enabled, your identity context is propagated to Amazon EMR, allowing for fine-grained access control and comprehensive audit trails. The following page provides instructions on how to connect your Studio notebook to Amazon EMR clusters. Once set up, you can use the `Connect to Cluster` option in your Studio notebook.

To connect Studio to Amazon EMR with trusted identity propagation enabled, ensure you have completed the following setups:
+  [Setting up trusted identity propagation for Studio](trustedidentitypropagation-setup.md) 
+  [Getting started with AWS IAM Identity Center integration for Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-idc-start.html) 
+  [Enable communications between Studio and Amazon EMR clusters](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-notebooks-emr-cluster.html) 

 **Connect to the Amazon EMR cluster** 

For a full list of options on how to connect your JupyterLab notebook to Amazon EMR, see [Connect to an Amazon EMR cluster](https://docs.aws.amazon.com/sagemaker/latest/dg/connect-emr-clusters.html).

# Connect your Studio JupyterLab notebooks to EMR Serverless with trusted identity propagation enabled
<a name="trustedidentitypropagation-emr-serverless"></a>

Amazon EMR Serverless provides a serverless option for running Apache Spark and Apache Hive applications without managing clusters. When integrated with trusted identity propagation, EMR Serverless automatically scales compute resources while maintaining your identity context for access control and auditing. This approach eliminates the operational overhead of cluster management while preserving the security benefits of identity-based access control. The following section provides information on how to connect your trusted identity propagation enabled Studio with the EMR Serverless.

To connect Studio to Amazon EMR Serverless with trusted identity propagation enabled, ensure you have completed the following setups:
+  [Setting up trusted identity propagation for Studio](trustedidentitypropagation-setup.md) 
+  [Trusted identity propagation with EMR Serverless](https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/security-iam-service-trusted-prop.html) 
+  [Enable communications between Studio and EMR Serverless](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-notebooks-emr-serverless.html) 

 **Connect to the EMR Serverless application** 

For a full list of options on how to connect your JupyterLab notebook to EMR Serverless, see [Connect to an EMR Serverless application](https://docs.aws.amazon.com/sagemaker/latest/dg/connect-emr-serverless-application.html).

# Connect Studio JupyterLab notebooks to Redshift Data API with trusted identity propagation enabled
<a name="trustedidentitypropagation-redshift-data-apis"></a>

Amazon Redshift Data API enables you to interact with your Amazon Redshift clusters programmatically without managing persistent connections. When combined with trusted identity propagation, the Redshift Data API provides secure, identity-based access to your data warehouse, allowing you to run SQL queries and retrieve results while maintaining full audit trails of user activities. This integration is particularly valuable for data science workflows that require access to structured data stored in Redshift. The following page includes information and instructions on how to connect trusted identity propagation with Amazon SageMaker Studio to Redshift Data API.

To connect Studio to Redshift Data API with trusted identity propagation enabled, ensure you have completed the following setups:
+  [Setting up trusted identity propagation for Studio](trustedidentitypropagation-setup.md) 
+  [Using Redshift Data API with trusted identity propagation](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api-trusted-identity-propagation.html) 
  + Ensure your execution role has relevant permissions for Redshift Data API. See [authorizing access](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api-access.html) for more information.
+  [Simplify access management with Amazon Redshift and AWS Lake Formation for users in an External Identity Provider](https://aws.amazon.com/blogs/big-data/simplify-access-management-with-amazon-redshift-and-aws-lake-formation-for-users-in-an-external-identity-provider/) 

# Connect Studio JupyterLab notebooks to Lake Formation and Athena with trusted identity propagation enabled
<a name="trustedidentitypropagation-lake-formation-athena"></a>

AWS Lake Formation and Amazon Athena work together to provide a comprehensive data lake solution with fine-grained access control and serverless query capabilities. Lake Formation centralizes permissions management for your data lake, while Athena provides interactive query services. When integrated with trusted identity propagation, this combination enables data scientists to access only the data they're authorized to see, with all queries and data access automatically logged for compliance and auditing purposes. The following page provides information and instructions on how to connect trusted identity propagation with Amazon SageMaker Studio to Lake Formation and Athena.

To connect Studio to Lake Formation and Athena with trusted identity propagation enabled, ensure you have completed the following setups:
+  [Setting up trusted identity propagation for Studio](trustedidentitypropagation-setup.md) 
+  [Create a Lake Formation role](https://docs.aws.amazon.com/lake-formation/latest/dg/prerequisites-identity-center.html) 
+  [Connect Lake Formation with IAM Identity Center](https://docs.aws.amazon.com/lake-formation/latest/dg/connect-lf-identity-center.html) 
+ Create Lake Formation resources:
  +  [Database](https://docs.aws.amazon.com/lake-formation/latest/dg/creating-database.html) 
  +  [Tables](https://docs.aws.amazon.com/lake-formation/latest/dg/creating-tables.html) 
+  [Create Athena workgroup](https://docs.aws.amazon.com/athena/latest/ug/creating-workgroups.html) 
  + Choose **AthenaSQL** for the engine
  + Choose **IAM Identity Center** for authentication method
  + Create a new service role
    + Ensure that the IAM Identity Center users have access to the query result location using Amazon S3 Access Grants
+  [Granting database permissions using the named resource method](https://docs.aws.amazon.com/lake-formation/latest/dg/granting-database-permissions.html) 

# Perform common UI tasks
<a name="studio-updated-common"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

 The following sections describe how to perform common tasks in the Amazon SageMaker Studio UI. For an overview of the Studio user interface, see [Amazon SageMaker Studio UI overview](studio-updated-ui.md). 

 **Set cookie preferences** 

1. Launch Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md). 

1.  At the bottom of the Studio user interface, choose **Cookie Preferences**. 

1.  Select the check box for each type of cookie that you want Amazon SageMaker AI to use. 

1.  Choose **Save preferences**. 

 **Manage notifications** 

Notifications give information about important changes to Studio, updates to applications, and issues to resolve. 

1. Launch Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md). 

1.  On the top navigation bar, choose the **Notifications** icon (![\[Logo for Notifications, a cloud service with a stylized bell icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/monarch/notification.png)). 

1.  From the list of notifications, select the notification to get information about it. 

 **Leave feedback** 

 We take your feedback seriously and encourage you to provide it. 

 At the top navigation of Studio, choose **Provide feedback**. 

 **Sign out** 

 Signing out of the Studio UI is different from closing the browser window. Signing out clears session data from the browser and deletes unsaved changes. 

The same behavior occurs when the Studio session times out, which happens after 5 minutes. 

1. Launch Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md). 

1. Choose the **User options** icon (![\[User icon with a circular avatar placeholder and a downward-pointing arrow.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/monarch/user-settings.png)). 

1.  Choose **Sign out**. 

1. In the pop-up window, choose **Sign out**. 

# NVMe stores with Amazon SageMaker Studio
<a name="studio-updated-nvme"></a>

Amazon SageMaker Studio applications and their associated notebooks run on Amazon Elastic Compute Cloud (Amazon EC2) instances. Some of the Amazon EC2 instance types, such as the `ml.m5d` instance family, offer non-volatile memory express (NVMe) solid state drives (SSD) instance stores. NVMe instance stores are local ephemeral disk stores that are physically connected to an instance for fast temporary storage. Studio applications support NVMe instance stores for supported instance types. For more information about instance types and their associated NVMe store volumes, see the [Amazon Elastic Compute Cloud Instance Type Details](https://aws.amazon.com/ec2/instance-types/). This topic provides information about accessing and using NVMe instance stores, as well as considerations when using NVMe instance stores with Studio.

## Considerations
<a name="studio-updated-nvme-considerations"></a>

The following considerations apply when using NVMe instance stores with Studio.
+ An NVMe instance store is temporary storage. The data stored on the NVMe store is deleted when the instance is terminated, stopped, or hibernated. When using NVMe stores with Studio applications, the data on the NVMe instance store is lost whenever the application is deleted, restarted, or patched. We recommend that you back up valuable data to persistent storage solutions, such as Amazon Elastic Block Store, Amazon Elastic File System, or Amazon Simple Storage Service. 
+ Studio patches instances periodically to install new security updates. When an instance is patched, the instance is restarted. This restart results in the deletion of data stored in the NVMe instance store. We recommend that you frequently back up necessary data from the NVMe instance store to persistent storage solutions, such as Amazon Elastic Block Store, Amazon Elastic File System, or Amazon Simple Storage Service. 
+ The following Studio applications support using NVMe storage:
  + JupyterLab
  + Code Editor, based on Code-OSS, Visual Studio Code - Open Source
  + KernelGateway

## Access NVMe instance stores
<a name="studio-updated-nvme-access"></a>

When you select an instance type with attached NVMe instance stores to host a Studio application, the NVMe instance store directory is mounted to the application container at the following location:

```
/mnt/sagemaker-nvme
```

If an instance has more than one NVMe instance store attached to it, Studio creates a striped logical volume that spans all of the attached local disks. Studio then mounts this striped logical volume to the `/mnt/sagemaker-nvme` directory. As a result, the directory storage size is the sum of all NVMe instance store volume sizes attached to the instance. 

If the `/mnt/sagemaker-nvme` directory does not exist, verify that the instance type hosting your application has an attached NVMe instance store volume.
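Because Studio stripes all attached NVMe stores into one logical volume, the usable capacity is simply additive. The following minimal Python sketch illustrates that arithmetic; the volume sizes are illustrative and not tied to a specific instance type.

```python
def striped_store_size_gb(nvme_volume_sizes_gb: list[int]) -> int:
    """Total capacity of /mnt/sagemaker-nvme: Studio stripes all attached
    NVMe instance stores into a single logical volume, so sizes add up."""
    return sum(nvme_volume_sizes_gb)

# For example, an instance type with two 300 GB NVMe stores (illustrative sizes)
total_gb = striped_store_size_gb([300, 300])
```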

# Local mode support in Amazon SageMaker Studio
<a name="studio-updated-local"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

Amazon SageMaker Studio applications support the use of local mode to create estimators, processors, and pipelines, then deploy them to a local environment. With local mode, you can test machine learning scripts before running them in Amazon SageMaker AI managed training or hosting environments. Studio supports local mode in the following applications:
+ Amazon SageMaker Studio Classic
+ JupyterLab
+ Code Editor, based on Code-OSS, Visual Studio Code - Open Source

Local mode in Studio applications is invoked using the SageMaker Python SDK. In Studio applications, local mode functions similarly to how it functions in Amazon SageMaker notebook instances, with some differences. With the [Rootless Docker configuration](studio-updated-local-get-started.md#studio-updated-local-rootless) enabled, you can also access additional Docker registries through your VPC configuration, including on-premises repositories, and public registries. For more information about using local mode with the SageMaker Python SDK, see [Local Mode](https://sagemaker.readthedocs.io/en/stable/overview.html#local-mode).

**Note**  
Studio applications do not support multi-container jobs in local mode. Local mode jobs are limited to a single instance for training, inference, and processing jobs. When creating a local mode job, the instance count configuration must be `1`. 
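Because local mode jobs are limited to a single instance, a small guard like the following hypothetical helper can catch a misconfigured instance count before a job is submitted. This is an illustrative sketch, not part of the SageMaker Python SDK.

```python
def validate_local_mode_instance_count(instance_count: int) -> None:
    """Local mode jobs in Studio are single-instance only; reject anything
    else before submitting a training, inference, or processing job."""
    if instance_count != 1:
        raise ValueError(
            f"Studio local mode supports only instance_count=1, got {instance_count}"
        )

# A count of 1 passes silently; anything else raises ValueError.
validate_local_mode_instance_count(1)
```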

## Docker support
<a name="studio-updated-local-docker"></a>

As part of local mode support, Studio applications support limited Docker access capabilities. With this support, users can interact with the Docker API from Jupyter notebooks or the image terminal of the application. Customers can interact with Docker using one of the following:
+ [Docker CLI](https://docs.docker.com/engine/reference/run/)
+ [Docker Compose CLI ](https://docs.docker.com/compose/reference/)
+ Language-specific Docker SDK clients

Studio also supports limited Docker access capabilities with the following restrictions:
+ Usage of Docker networks is not supported.
+ Docker [volume](https://docs.docker.com/storage/volumes/) usage is not supported when running containers. Only volume bind mount inputs are allowed during container orchestration. For Studio Classic, the volume bind mount inputs must be located on the Amazon Elastic File System (Amazon EFS) volume. For JupyterLab and Code Editor applications, they must be located on the Amazon Elastic Block Store (Amazon EBS) volume.
+ Container inspect operations are allowed.
+ Container port to host mapping is not allowed. However, you can specify a port for hosting. The endpoint is then accessible from Studio using the following URL:

  ```
  http://localhost:port
  ```

### Docker operations supported
<a name="studio-updated-local-docker-supported"></a>

The following table lists all of the Docker API endpoints that are supported in Studio, including any support limitations. If an API endpoint is missing from the table, Studio doesn't support it.


|  API Documentation  |  Limitations  | 
| --- | --- | 
|  [SystemAuth](https://docs.docker.com/engine/api/v1.43/#tag/System/operation/SystemAuth)  |   | 
|  [SystemEvents](https://docs.docker.com/engine/api/v1.43/#tag/System/operation/SystemEvents)  |   | 
|  [SystemVersion](https://docs.docker.com/engine/api/v1.43/#tag/System/operation/SystemVersion)  |   | 
|  [SystemPing](https://docs.docker.com/engine/api/v1.43/#tag/System/operation/SystemPing)  |   | 
|  [SystemPingHead](https://docs.docker.com/engine/api/v1.43/#tag/System/operation/SystemPingHead)  |   | 
|  [ContainerCreate](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerCreate)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated-local.html)  | 
|  [ContainerStart](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerStart)  |   | 
|  [ContainerStop](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerStop)  |   | 
|  [ContainerKill](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerKill)  |   | 
|  [ContainerDelete](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerDelete)  |   | 
|  [ContainerList](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerList)  |   | 
|  [ContainerLogs](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerLogs)  |   | 
|  [ContainerInspect](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerInspect)  |   | 
|  [ContainerWait](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerWait)  |   | 
|  [ContainerAttach](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerAttach)  |   | 
|  [ContainerPrune](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerPrune)  |   | 
|  [ContainerResize](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerResize)  |   | 
|  [ImageCreate](https://docs.docker.com/engine/api/v1.43/#tag/Image/operation/ImageCreate)  |  VPC-only mode support is limited to Amazon ECR images in allowlisted accounts. With the [Rootless Docker configuration](studio-updated-local-get-started.md#studio-updated-local-rootless) enabled, you can also access additional Docker registries through your VPC configuration, including on-premises repositories, and public registries. | 
|  [ImagePrune](https://docs.docker.com/engine/api/v1.43/#tag/Image/operation/ImagePrune)  |   | 
|  [ImagePush](https://docs.docker.com/engine/api/v1.43/#tag/Image/operation/ImagePush)  |  VPC-only mode support is limited to Amazon ECR images in allowlisted accounts. With the [Rootless Docker configuration](studio-updated-local-get-started.md#studio-updated-local-rootless) enabled, you can also access additional Docker registries through your VPC configuration, including on-premises repositories, and public registries. | 
|  [ImageList](https://docs.docker.com/engine/api/v1.43/#tag/Image/operation/ImageList)  |   | 
|  [ImageInspect](https://docs.docker.com/engine/api/v1.43/#tag/Image/operation/ImageInspect)  |   | 
|  [ImageGet](https://docs.docker.com/engine/api/v1.43/#tag/Image/operation/ImageGet)  |   | 
|  [ImageDelete](https://docs.docker.com/engine/api/v1.43/#tag/Image/operation/ImageDelete)  |   | 
|  [ImageBuild](https://docs.docker.com/engine/api/v1.43/#tag/Image/operation/ImageBuild)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated-local.html)  | 
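The table above can be treated as an allowlist. The following Python sketch encodes the supported endpoint names so that a script can check an operation before calling the Docker API; the helper is hypothetical and simply mirrors the table.

```python
# Docker API endpoints Studio supports, per the table above.
SUPPORTED_DOCKER_OPS = {
    "SystemAuth", "SystemEvents", "SystemVersion", "SystemPing", "SystemPingHead",
    "ContainerCreate", "ContainerStart", "ContainerStop", "ContainerKill",
    "ContainerDelete", "ContainerList", "ContainerLogs", "ContainerInspect",
    "ContainerWait", "ContainerAttach", "ContainerPrune", "ContainerResize",
    "ImageCreate", "ImagePrune", "ImagePush", "ImageList", "ImageInspect",
    "ImageGet", "ImageDelete", "ImageBuild",
}

def is_supported(operation: str) -> bool:
    """Anything missing from the table is unsupported in Studio."""
    return operation in SUPPORTED_DOCKER_OPS
```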

**Topics**
+ [Docker support](#studio-updated-local-docker)
+ [Getting started with local mode](studio-updated-local-get-started.md)

# Getting started with local mode
<a name="studio-updated-local-get-started"></a>

The following sections outline the steps needed to get started with local mode in Amazon SageMaker Studio, including:
+ Completing prerequisites
+ Setting `EnableDockerAccess`
+ Docker installation

## Prerequisites
<a name="studio-updated-local-prereq"></a>

Complete the following prerequisites to use local mode in Studio applications:
+ To pull images from an Amazon Elastic Container Registry repository, the account hosting the Amazon ECR image must provide access permission for the user’s execution role. The domain’s execution role must also allow Amazon ECR access.
+ Verify that you are using the latest version of the SageMaker Python SDK by using the following command: 

  ```
  pip install -U sagemaker
  ```
+ To use local mode and Docker capabilities, set the following parameter of the domain’s `DockerSettings` using the AWS Command Line Interface (AWS CLI): 

  ```
  EnableDockerAccess : ENABLED
  ```
+ Using `EnableDockerAccess`, you can also control whether users in the domain can use local mode. By default, local mode and Docker capabilities aren't allowed in Studio applications. For more information, see [Setting `EnableDockerAccess`](#studio-updated-local-enable).
+ Install the Docker CLI in the Studio application by following the steps in [Docker installation](#studio-updated-local-docker-installation).
+ For the [Rootless Docker configuration](#studio-updated-local-rootless), ensure your VPC has appropriate endpoints and routing configured for your desired Docker registries.
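To confirm the `EnableDockerAccess` prerequisite from a script, you can inspect the domain description (for example, the output of `aws sagemaker describe-domain`). The following sketch assumes the response contains the `DomainSettings.DockerSettings` structure shown on this page.

```python
def docker_access_enabled(describe_domain_response: dict) -> bool:
    """Return True if EnableDockerAccess is ENABLED in a domain description.

    Assumes the DomainSettings/DockerSettings shape used in this page's
    CLI examples; missing keys are treated as disabled.
    """
    settings = (
        describe_domain_response
        .get("DomainSettings", {})
        .get("DockerSettings", {})
    )
    return settings.get("EnableDockerAccess") == "ENABLED"

sample = {"DomainSettings": {"DockerSettings": {"EnableDockerAccess": "ENABLED"}}}
enabled = docker_access_enabled(sample)
```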

## Setting `EnableDockerAccess`
<a name="studio-updated-local-enable"></a>

The following sections show how to set `EnableDockerAccess` when the domain has public internet access or is in `VPC-only` mode.

**Note**  
Changes to `EnableDockerAccess` only apply to applications created after the domain is updated. You must create a new application after updating the domain.

**Public internet access**

The following example commands show how to set `EnableDockerAccess` when creating a new domain or updating an existing domain with public internet access:

```
# create new domain
aws --region region \
    sagemaker create-domain --domain-name domain-name \
    --vpc-id vpc-id \
    --subnet-ids subnet-ids \
    --auth-mode IAM \
    --default-user-settings "ExecutionRole=execution-role" \
    --domain-settings '{"DockerSettings": {"EnableDockerAccess": "ENABLED"}}' \
    --query DomainArn \
    --output text

# update domain
aws --region region \
    sagemaker update-domain --domain-id domain-id \
    --domain-settings-for-update '{"DockerSettings": {"EnableDockerAccess": "ENABLED"}}'
```

**`VPC-only` mode**

When using a domain in `VPC-only` mode, Docker image push and pull requests are routed through the service VPC instead of the VPC configured by the customer. Because of this functionality, administrators can configure a list of trusted AWS accounts to which users can make Amazon ECR Docker pull and push requests.

If a Docker image push or pull request is made to an AWS account that is not in the list of trusted AWS accounts, the request fails. Docker pull and push operations outside of Amazon Elastic Container Registry (Amazon ECR) aren't supported in `VPC-only` mode.

The following AWS accounts are trusted by default:
+ The account hosting the SageMaker AI domain.
+ SageMaker AI accounts that host the following SageMaker images:
  + DLC framework images
  + Sklearn, Spark, XGBoost processing images

To configure a list of additional trusted AWS accounts, specify the `VpcOnlyTrustedAccounts` value as follows:

```
aws --region region \
    sagemaker update-domain --domain-id domain-id \
    --domain-settings-for-update '{"DockerSettings": {"EnableDockerAccess": "ENABLED", "VpcOnlyTrustedAccounts": ["account-list"]}}'
```

**Note**  
When the [Rootless Docker configuration](#studio-updated-local-rootless) is enabled, `VpcOnlyTrustedAccounts` is ignored and Docker traffic routes through your VPC configuration, allowing access to any registry your VPC can reach.
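The trust rules above reduce to a simple membership check. The following hypothetical helper models it for the domain's own account plus any `VpcOnlyTrustedAccounts` entries (the SageMaker AI image-hosting accounts, which are also trusted by default, are omitted for brevity).

```python
def is_trusted_account(account_id: str, domain_account: str,
                       extra_trusted: list[str]) -> bool:
    """In VPC-only mode, Amazon ECR push/pull requests succeed only against
    trusted accounts: the account hosting the domain, plus any accounts
    listed in VpcOnlyTrustedAccounts."""
    return account_id == domain_account or account_id in extra_trusted
```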

## Rootless Docker configuration
<a name="studio-updated-local-rootless"></a>

When [`RootlessDocker`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DockerSettings.html) is enabled, Studio uses a [rootless Docker daemon](https://docs.docker.com/engine/security/rootless/) that routes traffic through your VPC. This provides enhanced security and allows access to additional Docker registries. The key difference with `RootlessDocker` is the following:
+ Your VPC configuration determines which registries are accessible for Docker operations. `VpcOnlyTrustedAccounts` is ignored and Docker traffic routes through your VPC configuration.

To use rootless Docker, set both `EnableDockerAccess` and `RootlessDocker` to `ENABLED` in your `DockerSettings`. For example, you can modify the domain settings in the [Setting `EnableDockerAccess`](#studio-updated-local-enable) examples to include:

```
'{"DockerSettings": {"EnableDockerAccess": "ENABLED", "RootlessDocker": "ENABLED"}}'
```
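Because rootless Docker requires both flags, a small validation step can catch an inconsistent configuration before you call `update-domain`. The following sketch encodes that rule; the function is illustrative, not an AWS API.

```python
def validate_docker_settings(docker_settings: dict) -> bool:
    """Rootless Docker requires EnableDockerAccess to also be ENABLED,
    per the setting combination described above."""
    if docker_settings.get("RootlessDocker") == "ENABLED":
        return docker_settings.get("EnableDockerAccess") == "ENABLED"
    return True  # nothing extra to check when rootless mode is off
```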

## Docker installation
<a name="studio-updated-local-docker-installation"></a>

To use Docker, you must manually install it from the terminal of your Studio application. The steps to install Docker differ depending on whether the domain has internet access.

### Internet access
<a name="studio-updated-local-docker-installation-internet"></a>

If the domain is created with public internet access or in `VPC-only` mode with limited internet access, use the following steps to install Docker.

1. (Optional) If your domain is created in `VPC-only` mode with limited internet access, create a public NAT gateway with access to the Docker website. For instructions, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html).

1. Navigate to the terminal of the Studio application that you want to install Docker in.

1. To return the operating system of the application, run the following command from the terminal:

   ```
   cat /etc/os-release
   ```

1. Install Docker following the instructions for the operating system of the application in the [Amazon SageMaker AI Local Mode Examples repository](https://github.com/aws-samples/amazon-sagemaker-local-mode/tree/main/sagemaker_studio_docker_cli_install).

   For example, install Docker on Ubuntu following the script at [sagemaker-ubuntu-focal-docker-cli-install.sh](https://github.com/aws-samples/amazon-sagemaker-local-mode/blob/main/sagemaker_studio_docker_cli_install/sagemaker-ubuntu-focal-docker-cli-install.sh) with the following considerations:
   + If chained commands fail, run commands one at a time.
   + Studio only supports Docker version `20.10.X` and Docker Engine API version `1.41`.
   + The following packages aren't required to use the Docker CLI in Studio and their installation can be skipped:
     + `containerd.io`
     + `docker-ce`
     + `docker-buildx-plugin`
**Note**  
You do not need to start the Docker service in your applications. The instance that hosts the Studio application runs the Docker service by default. All Docker API calls are routed through the Docker service automatically.

1. Use the exposed Docker socket for Docker interactions within Studio applications. By default, the following socket is exposed:

   ```
   unix:///docker/proxy.sock
   ```

   The following Studio application environmental variable for the default `USER` uses this exposed socket:

   ```
   DOCKER_HOST
   ```

### No internet access
<a name="studio-updated-local-docker-installation-no-internet"></a>

If the domain is created in `VPC-only` mode with no internet access, use the following steps to install Docker.

1. Navigate to the terminal of the Studio application that you want to install Docker in.

1. Run the following command from the terminal to return the operating system of the application:

   ```
   cat /etc/os-release
   ```

1. Download the required Docker `.deb` files to your local machine. For instructions about downloading the required files for the operating system of the Studio application, see [Install Docker Engine](https://docs.docker.com/engine/install/).

   For example, install Docker from a package on Ubuntu following steps 1–4 in [Install from a package](https://docs.docker.com/engine/install/ubuntu/#install-from-a-package) with the following considerations:
   + Install Docker from a package. Using other methods to install Docker will fail.
   + Install the latest packages corresponding to Docker version `20.10.X`.
   + The following packages aren't required to use the Docker CLI in Studio. You don't need to install the following:
     + `containerd.io`
     + `docker-ce`
     + `docker-buildx-plugin`
**Note**  
You do not need to start the Docker service in your applications. The instance that hosts the Studio application runs the Docker service by default. All Docker API calls are routed through the Docker service automatically.

1. Upload the `.deb` files to the Amazon EFS file system or to the Amazon EBS file system of the application.

1. Manually install the `docker-ce-cli` and `docker-compose-plugin` `.deb` packages from the Studio application terminal. For more information and instructions, see step 5 in [Install from a package](https://docs.docker.com/engine/install/ubuntu/#install-from-a-package) on the Docker docs website.

1. Use the exposed Docker socket for Docker interactions within Studio applications. By default, the following socket is exposed:

   ```
   unix:///docker/proxy.sock
   ```

   The following Studio application environmental variable for the default `USER` uses this exposed socket:

   ```
   DOCKER_HOST
   ```

# View your Studio running instances, applications, and spaces
<a name="studio-updated-running"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

The following topics include information and instructions about how to view your Studio running instances, applications, and spaces. For more information about Studio spaces, see [Amazon SageMaker Studio spaces](studio-updated-spaces.md).

## View your Studio running instances and applications
<a name="studio-updated-running-view-app"></a>

The **Running instances** page gives information about all running application instances that were created in Amazon SageMaker Studio by the user, or were shared with the user. 

You can view and stop running instances for all of your applications and spaces. If an instance is stopped, it does not appear on this page. Stopped instances can be viewed from the landing page for their respective application types. 

You can view a list of running applications and their details in Studio.

**To view running instances**

1. Launch Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md). 

1. On the left navigation pane, choose **Running instances**. 

1. From the **Running instances** page, you can view a list of running applications and details about those applications. 

   To view non-running instances, from the left navigation pane, choose the relevant application under **Applications**. Non-running applications have the **Stopped** status under the **Status** column.

## View your Studio spaces
<a name="studio-updated-running-view-space"></a>

The **Spaces** section within your **Domain details** page gives information about Studio spaces within your domain. You can view, create, and delete spaces on this page. 

The spaces that you can view in the **Spaces** section are running spaces for the following:
+ JupyterLab private space. For information about JupyterLab, see [SageMaker JupyterLab](studio-updated-jl.md).
+ Code Editor private space. For information about Code Editor, based on Code-OSS, Visual Studio Code - Open Source, see [Code Editor in Amazon SageMaker Studio](code-editor.md).
+ Studio Classic shared space. For information about Studio Classic shared space, see [Collaboration with shared spaces](domain-space.md).

There are no spaces for SageMaker Canvas, Studio Classic (private), or RStudio. 

**View your Studio spaces in a domain**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the left navigation pane, expand **Admin configurations** and choose **Domains**.

1. Choose the domain where you want to view the spaces.

1. On the **Domain details** page, choose the **Space management** tab to open the **Spaces** section.

**View your Studio spaces using the AWS CLI**

Use the following command to list all spaces in your domain:

```
aws sagemaker list-spaces --region us-east-1 --domain-id domain-id
```
+ Replace `us-east-1` with your AWS Region.
+ Replace *domain-id* with your domain ID. For more information, see [View domains](domain-view.md).
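If you call `list-spaces` from a script, building the argument vector explicitly (for example, for `subprocess.run`) avoids shell-quoting issues. The following sketch constructs the same command shown above; the domain ID is a placeholder.

```python
def list_spaces_command(region: str, domain_id: str) -> list[str]:
    """Argument vector for the list-spaces call shown above, suitable
    for subprocess.run (not executed here)."""
    return [
        "aws", "sagemaker", "list-spaces",
        "--region", region,
        "--domain-id", domain_id,
    ]

cmd = list_spaces_command("us-east-1", "d-example")
```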

# Stop and delete your Studio running applications and spaces
<a name="studio-updated-running-stop"></a>

The following page includes information and instructions on how to stop and delete unused Amazon SageMaker Studio resources to avoid unwanted additional costs. For Studio resources that you no longer wish to use, you must do both of the following:
+ Stop the application: This stops the application and deletes the instance that the application is running on. After you stop an application, you can start it again later.
+ Delete the space: This deletes the Amazon EBS volume that was created for the application and instance.
**Important**  
If you delete the space, you will lose access to the data within that space. Do not delete the space unless you're sure that you want to.

For more information about the differences between Studio spaces and applications, see [View your Studio running instances, applications, and spaces](studio-updated-running.md).

**Topics**
+ [Stop your Amazon SageMaker Studio application](#studio-updated-running-stop-app)
+ [Delete a Studio space](#studio-updated-running-stop-space)

## Stop your Amazon SageMaker Studio application
<a name="studio-updated-running-stop-app"></a>

To avoid additional charges from unused running applications, you must stop them. The following includes information on what stopping an application does and how to do it.
+ The following instructions use the [DeleteApp](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DeleteApp.html) API to stop the application. This also stops the instance that the application is running on.
+ After you stop an application, you can start up the application again later.
  + When you stop an application, the files in the space persist. You can run the application again and expect to have access to the same files that are stored in the space, as you did before stopping the application.

    
  + When you stop an application, the *metadata* for the application will be deleted within 24 hours. For more information, see the note in the `CreationTime` response element for the [DescribeApp](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeApp.html#sagemaker-DescribeApp-response-CreationTime) API.

**Note**  
If the service detects that an application is unhealthy, it assumes the [AmazonSageMakerNotebooksServiceRolePolicy](security-iam-awsmanpol-notebooks.md#security-iam-awsmanpol-AmazonSageMakerNotebooksServiceRolePolicy) service-linked role and deletes the application using the [DeleteApp](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DeleteApp.html) API.

The following tabs provide instructions to stop an application from your domain using the Studio UI, the SageMaker AI console, or the AWS CLI.

**Note**  
To view and stop all of your Studio running instances in one location, we recommend the [Stop applications using the Studio UI](#studio-updated-running-stop-app-using-studio-updated-ui) workflow from the following options.

### Stop applications using the Studio UI
<a name="studio-updated-running-stop-app-using-studio-updated-ui"></a>

To stop your Studio applications using the Studio UI, use the following instructions.

**To stop your applications (Studio UI)**

1. Launch Studio. This process may differ depending on your setup. For information about launching Studio, see [Launch Amazon SageMaker Studio](studio-updated-launch.md). 

1. From the left navigation pane, choose **Running instances**. 

   If the table on the page is empty, you don't have any running instances or applications in your spaces.

1. In the table under the **Name** and **Application** columns, find the space name and the application that you want to stop.

1. Choose the corresponding **Stop** button to stop the application.

### Stop applications using the SageMaker AI console
<a name="studio-updated-running-stop-app-using-sagemaker-console"></a>

To view or stop Studio running instances from a centralized location, see [Stop applications using the Studio UI](#studio-updated-running-stop-app-using-studio-updated-ui). Otherwise, use the following instructions.

In the SageMaker AI console, you can only stop the running Studio applications for the spaces that you are able to view in the **Spaces** section of the console. For a list of the viewable spaces, see [View your Studio spaces](studio-updated-running.md#studio-updated-running-view-space).

These steps show how to stop your Studio applications by using the SageMaker AI console.

**To stop your applications (SageMaker AI console)**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

1. Choose the domain that contains the application that you want to stop.

1. On the **Domain details** page, choose the **Space management** tab.

1. 
**Important**  
In the **Space management** tab, you have the option to delete the space. There is a difference between deleting the space and deleting an application. If you delete the space, you will lose access to the data within that space. Do not delete the space unless you're sure that you want to.

   To stop the application, in the **Space management** tab and under the **Name** column, choose the space for the application.

1. In the **Apps** section and under the **App type** column, search for the app to stop.

1. Under the **Action** column, choose the corresponding **Delete app** button.

1. In the pop-up box, choose **Yes, delete app**. After you do so, the delete input field becomes available.

1. Enter **delete** in the delete input field to confirm deletion.

1. Choose **Delete**.

### Stop your domain applications using the AWS CLI
<a name="studio-updated-running-stop-app-using-cli"></a>

To view or stop any of your Studio running instances from a centralized location, see [Stop applications using the Studio UI](#studio-updated-running-stop-app-using-studio-updated-ui). Otherwise, use the following instructions.

The following code examples use the [DeleteApp](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DeleteApp.html) API to stop an application in an example domain. 

To stop your running **JupyterLab** or **Code Editor** instances, use the following code example:

```
aws sagemaker delete-app \
--domain-id example-domain-id \
--region aws-region \
--app-name default \
--app-type example-app-type \
--space-name example-space-name
```
+ To obtain your `example-domain-id`, use the following instructions:

**To get `example-domain-id`**

  1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

  1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

  1. Choose the relevant domain.

  1. On the **Domain details** page, choose the **Domain settings** tab.

  1. Copy the **Domain ID**.
+ To obtain your `AWS Region`, use the following instructions to ensure you are using the correct AWS Region for your domain: 

**To get `AWS Region`**

  1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

  1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

  1. Choose the relevant domain.

  1. On the **Domain details** page, verify that this is the relevant domain.

  1. Expand the Region dropdown list at the top right of the SageMaker AI console, and use the AWS Region code shown to the right of your AWS Region name. For example, `us-west-1`.
+ For `example-app-type`, use the application type that's relevant to the application that you want to stop. For example, replace `example-app-type` with one of the following application types:
  + JupyterLab application type: `JupyterLab`. For information about JupyterLab, see [SageMaker JupyterLab](studio-updated-jl.md).
  + Code Editor application type: `CodeEditor`. For information about Code Editor, based on Code-OSS, Visual Studio Code - Open Source, see [Code Editor in Amazon SageMaker Studio](code-editor.md).
+ To obtain your `example-space-name`, use the following steps: 

**To get `example-space-name`**

  1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

  1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

  1. Choose the relevant domain.

  1. On the **Domain details** page, choose the **Space management** tab.

  1. Copy the relevant space name.
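Putting these values together, a fully assembled call might look like the following sketch. All values are hypothetical placeholders, and the script only prints the command so that you can review it before running it.

```shell
# Sketch only: assembles the delete-app call with hypothetical values.
# Replace each value with the ones you collected in the steps above.
DOMAIN_ID="d-xxxxxxxxxxxx"   # from the Domain settings tab
REGION="us-west-2"           # your domain's AWS Region code
APP_TYPE="JupyterLab"        # or CodeEditor
SPACE_NAME="my-space"        # from the Space management tab

CMD="aws sagemaker delete-app --domain-id $DOMAIN_ID --region $REGION --app-name default --app-type $APP_TYPE --space-name $SPACE_NAME"
echo "$CMD"   # review, then run with: eval "$CMD"
```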

To stop running instances for **SageMaker Canvas**, **Studio Classic**, or **RStudio**, use the following code example:

```
aws sagemaker delete-app \
--domain-id example-domain-id \
--region aws-region \
--app-name default \
--app-type example-app-type \
--user-profile-name example-user-name
```
+ For `example-app-type`, use the application type relevant to the application that you want to stop. For example, replace `example-app-type` with one of the following application types:
  + SageMaker Canvas application type: `Canvas`. For information about SageMaker Canvas, see [Amazon SageMaker Canvas](canvas.md).
  + Studio Classic application type: `JupyterServer`. For information about Studio Classic, see [Amazon SageMaker Studio Classic](studio.md).
  + RStudio application type: `RStudioServerPro`. For information about RStudio, see [RStudio on Amazon SageMaker AI](rstudio.md).
+ To obtain your `example-user-name`, navigate to the **Domain details** page. Then choose the **User profiles** tab, and copy the relevant user profile name.
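If you are unsure which applications are currently running in the domain, one option (sketched below with a hypothetical domain ID) is to list them first with the `list-apps` CLI command and note the `AppType` and `UserProfileName` values before deleting anything.

```shell
# Sketch: list the applications in a domain before deleting anything.
# The domain ID is a hypothetical placeholder.
DOMAIN_ID="d-xxxxxxxxxxxx"
CMD="aws sagemaker list-apps --domain-id-equals $DOMAIN_ID --output table"
echo "$CMD"   # review, then run with: eval "$CMD"
```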

For alternative instructions to stop your running Studio applications, see: 
+ JupyterLab: [Delete unused resources](studio-updated-jl-admin-guide-clean-up.md).
+ Code Editor: [Shut down Code Editor resources](code-editor-use-log-out.md).
+ SageMaker Canvas: [Logging out of Amazon SageMaker Canvas](canvas-log-out.md).
+ Studio Classic: [Shut Down and Update Amazon SageMaker Studio Classic and Apps](studio-tasks-update.md).
+ RStudio: [Shut down RStudio](rstudio-shutdown.md).

## Delete a Studio space
<a name="studio-updated-running-stop-space"></a>

**Important**  
After you delete your space, you will lose all of the data stored in the space. We recommend that you back up your data before deleting your space.

To delete a Studio space, you need administrator permissions, or at minimum permissions to update the domain, IAM, and Amazon S3.
+ Spaces are used to manage the storage and resource needs of the relevant application. When you delete a space, its storage volume is also deleted. As a result, you lose access to the files stored in that space. For more information about Studio spaces, see [Amazon SageMaker Studio spaces](studio-updated-spaces.md).

  We recommend that you back up your data if you choose to delete a space.
+ After you delete a space, you can't access that space again.

You can delete the Studio spaces that are viewable in the **Spaces** section of the console. For a list of the viewable spaces, see [View your Studio spaces](studio-updated-running.md#studio-updated-running-view-space). 

There are no spaces for SageMaker Canvas, Studio Classic (private), and RStudio. To stop and delete your SageMaker Canvas, Studio Classic (private), or RStudio applications, see [Stop your Amazon SageMaker Studio application](#studio-updated-running-stop-app).

### Delete a space using the SageMaker AI console
<a name="studio-updated-running-stop-space-using-sagemaker-console"></a>

The **Spaces** section within your **Domain details** page gives information about Studio spaces within your domain. You can view, create, and delete spaces on this page. 

**To view Studio spaces in a domain**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

1. Choose the domain where you want to view the spaces.

1. On the **Domain details** page, choose the **Space management** tab to open the **Spaces** section.

1. Select the space to delete.

1. Choose **Delete**. 

1. In the pop-up box titled **Delete space**, you have two options: 
   + If you already shut down all applications in the space, choose **Yes, delete space**.
   + If you still have applications running in the space, choose **Yes, shut down all apps and delete space**.

1. Enter **delete** in the delete input field to confirm deletion.

1. To delete the space, you have two options:
   + If you already shut down all applications in the space, choose **Delete space**.
   + If you still have applications running in the space, choose **Shut down all apps and delete space**.

### Delete a space using the AWS CLI
<a name="studio-updated-running-stop-space-using-cli"></a>

Before you can delete a space using the AWS CLI, you must delete the application associated with it. For information about stopping your Studio applications, see [Stop your Amazon SageMaker Studio application](#studio-updated-running-stop-app).

Use the following AWS CLI command to delete a space within a domain:

```
aws sagemaker delete-space \
--domain-id example-domain-id \
--region aws-region \
--space-name example-space-name
```
+ To obtain your `example-domain-id`, use the following instructions:

**To get `example-domain-id`**

  1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

  1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

  1. Choose the relevant domain.

  1. On the **Domain details** page, choose the **Domain settings** tab.

  1. Copy the **Domain ID**.
+ To obtain your `AWS Region`, use the following instructions to ensure you are using the correct AWS Region for your domain: 

**To get `AWS Region`**

  1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

  1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

  1. Choose the relevant domain.

  1. On the **Domain details** page, verify that this is the relevant domain.

  1. Expand the Region dropdown list at the top right of the SageMaker AI console, and use the AWS Region code shown to the right of your AWS Region name. For example, `us-west-1`.
+ To obtain your `example-space-name`, use the following steps: 

**To get `example-space-name`**

  1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

  1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

  1. Choose the relevant domain.

  1. On the **Domain details** page, choose the **Space management** tab.

  1. Copy the relevant space name.
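Because the application must be stopped before the space can be deleted, the two CLI calls are commonly run in sequence. The following sketch, using hypothetical values, prints both commands in order so that you can review them first:

```shell
# Sketch: stop the app, then delete its space (all values are hypothetical).
DOMAIN_ID="d-xxxxxxxxxxxx"
REGION="us-west-2"
SPACE_NAME="my-space"

for STEP in \
  "aws sagemaker delete-app --domain-id $DOMAIN_ID --region $REGION --app-name default --app-type JupyterLab --space-name $SPACE_NAME" \
  "aws sagemaker delete-space --domain-id $DOMAIN_ID --region $REGION --space-name $SPACE_NAME"
do
  echo "$STEP"   # review; replace echo with eval to execute each step
done
```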

# SageMaker Studio image support policy
<a name="sagemaker-distribution"></a>

**Important**  
Currently, all packages in SageMaker Distribution images are licensed for use with Amazon SageMaker AI and do not require additional commercial licenses. However, this might be subject to change in the future, and we recommend reviewing the licensing terms regularly for any updates.

Amazon SageMaker Distribution is a set of Docker images available on SageMaker Studio that include popular frameworks for machine learning, data science, and visualization.

The images include deep learning frameworks like PyTorch, TensorFlow, and Keras; popular Python packages like NumPy, scikit-learn, and pandas; and IDEs like JupyterLab and Code Editor, based on Code-OSS, Visual Studio Code - Open Source. The distribution contains the latest versions of all of these packages, selected so that they are mutually compatible.

This page details the support policy and availability for SageMaker Distribution Images on SageMaker Studio.

## Versioning, release cadence, and support policy
<a name="sm-distribution-versioning"></a>

The following table outlines the release schedule for SageMaker Distribution image versions and their planned support. AWS provides ongoing functionality and security updates for supported image versions. New minor versions are released for each major version for 12 months, and supported minor versions receive ongoing functionality and security patches. In some cases, an image version may reach end of support earlier than originally planned if (a) security issues cannot be addressed while maintaining semantic versioning guidelines or (b) any of our major dependencies, such as Python, reach end of life. AWS releases ad hoc major or minor versions on an as-needed basis.


| Version | Description | Release cadence | Planned support | 
| --- | --- | --- | --- | 
| Major | Amazon SageMaker Distribution's major version releases involve upgrading all of its core dependencies to the latest compatible versions. These major releases may also add or remove packages as part of the update. Major versions are denoted by the first number in the version string, such as 1.0, 2.0, or 3.0. | 6 months | 18 months | 
| Minor | Amazon SageMaker Distribution's minor version releases include upgrading all of its core dependencies to the latest compatible minor versions within the same major version. SageMaker Distribution can add new packages during a minor version release. Minor versions are denoted by the second number in the version string, for example, 1.1, 1.2, or 2.1. | 1 month | 6 months | 
| Patch | Amazon SageMaker Distribution's patch version releases include updating all of its core dependencies to the latest compatible patch versions within the same minor version. SageMaker Distribution does not add or remove any packages during a patch version release. Patch versions are denoted by the third number in the version string, for example, 1.1.1, 1.2.1, or 2.1.3. Since patch versions are generally released for fixing security vulnerabilities, we recommend always upgrading to the newest patch version when they become available. | As necessary for fixing security vulnerabilities | Until new patch version is released | 

Each major version of the Amazon SageMaker Distribution is available for 18 months. During the first 12 months, new minor versions are released monthly. For the remaining 6 months, the existing minor versions continue to be supported.
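As a concrete reading of this scheme, the following minimal shell sketch splits a version string such as `2.6.1` (an illustrative value) into the major, minor, and patch components described in the table above:

```shell
# Minimal sketch: split an illustrative SageMaker Distribution version string
# into the major/minor/patch components described in the table above.
VERSION="2.6.1"
MAJOR=${VERSION%%.*}    # "2" - major: all core dependencies upgraded, every 6 months
PATCH=${VERSION##*.}    # "1" - patch: released as needed for security fixes
REST=${VERSION#*.}
MINOR=${REST%%.*}       # "6" - minor: released monthly within a major version
echo "major=$MAJOR minor=$MINOR patch=$PATCH"
```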

## Supported image versions
<a name="sagemaker-distribution-supported-packages"></a>

The tables below list the supported SageMaker Distribution image versions, their planned end of support dates, and their availability on SageMaker Studio. For image versions whose support ends sooner than the planned end of support date, the versions remain available on Studio until the designated availability date. You can continue using the image to launch applications for up to 90 days or until the availability date on Studio, whichever comes first. For more information about such cases, contact AWS Support.

Migrate to a newer supported version as soon as possible to ensure that you receive ongoing functionality and security updates. When choosing an image version in SageMaker Studio, we recommend that you choose a supported image version from the tables below.

### Supported major versions
<a name="sm-distribution-major-versions"></a>

The following table lists the supported SageMaker Distribution major image versions.


| Image version | Last minor version release | Supported until | Description | 
| --- | --- | --- | --- | 
| 1.x.x | Apr 30th, 2025 | Oct 30th, 2025 | SageMaker Distribution major version 1 is built with Python 3.10. | 
| 2.x.x | Aug 25th, 2025 | Feb 25th, 2026 | SageMaker Distribution major version 2 is built with Python 3.11. | 
|  3.x.x  | Mar 29th, 2026 | Sep 29th, 2026 |  SageMaker Distribution major version 3 is built with Python 3.12.  | 

### CPU image minor versions
<a name="sm-distribution-cpu-versions"></a>

The following table lists the supported SageMaker Distribution minor image versions for CPUs.


| Image version | Amazon ECR image URI | Planned end of support date | Availability on Studio until | Release notes | 
| --- | --- | --- | --- | --- | 
| 3.1.x | public.ecr.aws/sagemaker/sagemaker-distribution:3.1-cpu | Nov 19th, 2025 | Nov 19th, 2025 | [Release notes](https://github.com/aws/sagemaker-distribution/tree/main/build_artifacts/v3/v3.1) | 
| 3.0.x  | public.ecr.aws/sagemaker/sagemaker-distribution:3.0-cpu  | Jun 30th, 2025 | Sep 29th, 2025 | [Release notes](https://github.com/aws/sagemaker-distribution/tree/main/build_artifacts/v3/v3.0) | 
| 2.6.x | public.ecr.aws/sagemaker/sagemaker-distribution:2.6-cpu | Jun 30th, 2025 | Oct 28th, 2025 | [Release notes](https://github.com/aws/sagemaker-distribution/tree/main/build_artifacts/v2/v2.6) | 

### GPU image minor versions
<a name="sm-distribution-gpu-versions"></a>

The following table lists the supported SageMaker Distribution minor image versions for GPUs.


| Image version | Amazon ECR image URI | Planned end of support date | Availability on Studio until | Release notes for newest patch | 
| --- | --- | --- | --- | --- | 
| 3.1.x | public.ecr.aws/sagemaker/sagemaker-distribution:3.1-gpu | Nov 19th, 2025 | Nov 19th, 2025 | [Release notes](https://github.com/aws/sagemaker-distribution/tree/main/build_artifacts/v3/v3.1) | 
| 3.0.x | public.ecr.aws/sagemaker/sagemaker-distribution:3.0-gpu | Jun 30th, 2025 | Sep 29th, 2025 | [Release notes](https://github.com/aws/sagemaker-distribution/tree/main/build_artifacts/v3/v3.0) | 
| 2.6.x | public.ecr.aws/sagemaker/sagemaker-distribution:2.6-gpu | Jun 30th, 2025 | Oct 28th, 2025 | [Release notes](https://github.com/aws/sagemaker-distribution/tree/main/build_artifacts/v2/v2.6) | 

### Unsupported images
<a name="sm-distribution-unsupported-images"></a>

The following table lists unsupported SageMaker Distribution image versions.


| Image version | Amazon ECR image URI | End of support date | Availability on Studio until | 
| --- | --- | --- | --- | 
| 2.4.x |  public.ecr.aws/sagemaker/sagemaker-distribution:2.4-cpu public.ecr.aws/sagemaker/sagemaker-distribution:2.4-gpu  | Sep 7th, 2025 | Sep 7th, 2025 | 
| 2.3.x |  public.ecr.aws/sagemaker/sagemaker-distribution:2.3-cpu public.ecr.aws/sagemaker/sagemaker-distribution:2.3-gpu  | July 27th, 2025 | July 27th, 2025 | 
| 2.2.x |  public.ecr.aws/sagemaker/sagemaker-distribution:2.2-cpu public.ecr.aws/sagemaker/sagemaker-distribution:2.2-gpu  | May 15th, 2025 | May 15th, 2025 | 
| 2.1.x |  public.ecr.aws/sagemaker/sagemaker-distribution:2.1-cpu public.ecr.aws/sagemaker/sagemaker-distribution:2.1-gpu  | Apr 25th, 2025 | May 12th, 2025 | 
| 2.0.x |  public.ecr.aws/sagemaker/sagemaker-distribution:2.0-cpu public.ecr.aws/sagemaker/sagemaker-distribution:2.0-gpu  | Feb 25th, 2025 | Apr 21st, 2025 | 
| 1.13.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.13-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.13-gpu  | May 15th, 2025 | Sep 20th, 2025 | 
| 1.12.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.12-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.12-gpu  | July 23rd, 2025 | July 23rd, 2025 | 
| 1.11.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.11-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.11-gpu  | Apr 1st, 2025 | May 12th, 2025 | 
| 1.10.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.10-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.10-gpu  | Feb 5th, 2025 | Apr 10th, 2025 | 
| 1.9.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.9-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.9-gpu  | Jan 15th, 2025 | Apr 10th, 2025 | 
| 1.8.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.8-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.8-gpu  | Dec 31st, 2024 | Apr 10th, 2025 | 
| 1.7.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.7-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.7-gpu  | Dec 15th, 2024 | Apr 10th, 2025 | 
| 1.6.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.6-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.6-gpu  | Dec 15th, 2024 | Apr 10th, 2025 | 
| 1.5.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.5-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.5-gpu  | Oct 31st, 2024 | Nov 1st, 2024 | 
| 1.4.x |  public.ecr.aws/sagemaker/sagemaker-distribution:1.4-cpu public.ecr.aws/sagemaker/sagemaker-distribution:1.4-gpu  | Oct 31st, 2024 | Nov 1st, 2024 | 
| 1.3.x | public.ecr.aws/sagemaker/sagemaker-distribution:1.3-cpu | June 28th, 2024 | Oct 18th, 2024 | 
| 1.2.x | public.ecr.aws/sagemaker/sagemaker-distribution:1.2-cpu | June 28th, 2024 | Oct 18th, 2024 | 

### Frequently asked questions
<a name="sm-distribution-faqs"></a>

**What constitutes a major image version release?**

Major image versions are released every 6 months. A major image version release for Amazon SageMaker Distribution involves upgrading all core dependencies to the latest compatible versions and may include adding or removing packages. The Python version is only upgraded with new major version releases. For example, with the major version 2 release, Python was upgraded from 3.10 to 3.11, PyTorch from 2.0 to 2.3, TensorFlow from 2.14 to 2.17, AutoGluon from 0.8 to 1.1, and 4 packages were added to the image.

**What constitutes a minor image version release?**

Minor image versions are released for all supported major versions monthly. A minor image version release for Amazon SageMaker Distribution involves upgrading all core dependencies except Python and CUDA to the latest compatible minor versions within the same major version and may include adding new packages. For example, with a minor version release, langchain might be upgraded from 0.1 to 0.2 and jupyter-ai from 2.18 to 2.20.

**What constitutes a patch image version release?**

Patch image versions are released as necessary to fix security vulnerabilities. A patch image version release for Amazon SageMaker Distribution involves updating all of its core dependencies to the latest compatible patch versions within the same minor version. SageMaker Distribution does not add or remove any packages during a patch version release. For example, with a patch version release, matplotlib might be upgraded from 3.9.1 to 3.9.2 and boto3 from 1.34.131 to 1.34.162.

**Where can I find the packages available in a specific image version?**

Each image version has a `release.md` file in the [GitHub repository's](https://github.com/aws/sagemaker-distribution) `build_artifacts` folder, showing all packages and package versions for CPU and GPU images. Separate changelog files for CPU and GPU versions detail package upgrades. Changelogs compare the new image version to the previous one. For example, version 1.9.0 compares to the latest patch version of 1.8, version 1.9.1 compares to 1.9.0, and version 2.0.0 compares to the latest patch version of the latest minor version available at the time.

**How are images scanned for Common Vulnerabilities and Exposures (CVEs)?**

Amazon SageMaker AI leverages [Amazon Elastic Container Registry (Amazon ECR) enhanced scanning](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.html) to automatically detect vulnerabilities and fixes for SageMaker Distribution Images. AWS continuously runs ECR enhanced scanning for the latest patch version of all supported image versions. When vulnerabilities are detected and a fix is available, AWS releases an updated image version to remediate the issue.

**Can I still use older images after an image is no longer supported?**

Images are available on SageMaker Studio until the designated availability date. Older images remain available in ECR after they reach end of support and are removed from Studio. You can download older image versions from ECR and [create a custom SageMaker image](studio-byoi-create.md). However, we highly recommend upgrading to a supported image version that continuously receives security updates and bug fixes. Customers who build their own custom images are responsible for scanning and patching their images. For more information, see the [AWS Shared Responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/).
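For example, one possible starting point for a custom image is to pull an older version's URI from the tables above. The tag below is illustrative, and the sketch only prints the command; actually pulling requires Docker and network access.

```shell
# Sketch: pull an unsupported image version from the public ECR gallery as a
# base for a custom image. The tag is taken from the tables above.
IMAGE_URI="public.ecr.aws/sagemaker/sagemaker-distribution:1.13-cpu"
CMD="docker pull $IMAGE_URI"
echo "$CMD"   # review, then run with: eval "$CMD" (requires Docker)
```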

**Important**  
SageMaker Distribution v0.x.y is only used in Studio Classic. SageMaker Distribution v1.x.y is only used in JupyterLab.

# Amazon SageMaker Studio pricing
<a name="studio-updated-cost"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

There is no additional charge for using the Amazon SageMaker Studio UI.  

The following do incur costs:
+ Amazon Elastic Block Store or Amazon Elastic File System volumes that are mounted with your applications.
+ Any jobs and resources that users launch from Studio applications.
+ Launching a JupyterLab application, even if no resources or jobs are launched in the application. 

For information about how Amazon SageMaker Studio Classic is billed, see [Amazon SageMaker Studio Classic Pricing](studio-pricing.md). 

For more information about billing along with pricing examples, see [Amazon SageMaker Pricing](https://aws.amazon.com/sagemaker/pricing/). 

# Troubleshooting
<a name="studio-updated-troubleshooting"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

This section shows how to troubleshoot common problems in Amazon SageMaker Studio.

## Recovery mode
<a name="studio-updated-troubleshooting-recovery-mode"></a>

Recovery mode allows you to access your Studio application when a configuration issue prevents a normal startup. It provides a simplified environment with essential functionality to help you diagnose and fix the issue.

When an application fails to launch, you may see an error message about accessing recovery mode to address one of the following configuration issues.
+ Corrupted [`.condarc`](https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html) configuration file.

  For information on troubleshooting your `.condarc` file, see the [troubleshooting](https://docs.conda.io/projects/conda/en/latest/user-guide/troubleshooting.html) page in the *Conda user guide*.
+ Insufficient storage volume available. 

  You can increase the Amazon EBS space storage available for the application or enter recovery mode to remove unnecessary data.

  For information on increasing the Amazon EBS volume size, see [request a quota size](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html) in the *Service Quotas Developer Guide*.

In recovery mode:
+ Your home directory differs from the one used during a normal startup. This directory is temporary and ensures that any corrupted configurations in your standard home directory do not impact your recovery mode operations. You can navigate to your standard home directory by using the command `cd /home/sagemaker-user`.
  + Standard mode: `/home/sagemaker-user`
  + Recovery mode: `/tmp/sagemaker-recovery-mode-home`
+ The conda environment uses a minimal base conda environment with essential packages only. The simplified conda setup helps isolate environment-related issues and provides basic functionality for troubleshooting.

You can use the Studio UI or the AWS CLI to access the application in recovery mode.

### Use the Studio UI to access the application in recovery mode
<a name="studio-updated-troubleshooting-recovery-mode-console"></a>

The following provides instructions on accessing your application in recovery mode.

1. If you have not already done so, launch the Studio UI by following the instructions in [Launch from the Amazon SageMaker AI console](studio-updated-launch.md#studio-updated-launch-console).

1. In the left navigation menu, under **Applications**, choose the application.

1. Choose the space you are having configuration issues with.

   The following steps become available when you have one or more of the configuration issues mentioned previously. In this case, you will see a warning banner and a **Recovery mode** message. 
**Note**  
The warning banner should have a recommended solution for the issue. Take note of it before proceeding.

1. Choose **Run space (Recovery mode)**. 

1. To access your application in recovery mode, choose **Open *application* (Recovery mode)**.

### Use the AWS CLI to access the application in recovery mode
<a name="studio-updated-troubleshooting-recovery-mode-cli"></a>

To access your application in recovery mode, you must append `--recovery-mode` to your [create-app](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-app.html) AWS CLI command. The following provides an example on how to access your application in recovery mode. 

For the following example, you will need your:
+ *domain-id*

  To obtain your domain details, see [View domains](domain-view.md).
+ *space-name*

  To obtain the space names associated with your domain, see [Use the AWS CLI to view the SageMaker AI spaces in your domain](sm-console-domain-resources-view.md#sm-console-domain-resources-view-spaces-cli).
+ *app-name*

  The name of your application. To view your applications, see [Use the AWS CLI to view the SageMaker AI applications in your domain](sm-console-domain-resources-view.md#sm-console-domain-resources-view-apps-cli).

------
#### [ Access Code Editor application in recovery mode ]

```
aws sagemaker create-app \
    --app-name app-name \
    --app-type CodeEditor \
    --domain-id domain-id \
    --space-name space-name \
    --recovery-mode
```

------
#### [ Access JupyterLab application in recovery mode ]

```
aws sagemaker create-app \
    --app-name app-name \
    --app-type JupyterLab \
    --domain-id domain-id \
    --space-name space-name \
    --recovery-mode
```

------
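After issuing either command, you can confirm that the application started by checking its status with `describe-app`. The sketch below uses hypothetical values and only prints the command so you can review it first:

```shell
# Sketch: check the status of the recovery-mode application.
# All values are hypothetical placeholders.
DOMAIN_ID="d-xxxxxxxxxxxx"
SPACE_NAME="my-space"
APP_NAME="my-app"
CMD="aws sagemaker describe-app --domain-id $DOMAIN_ID --space-name $SPACE_NAME --app-type JupyterLab --app-name $APP_NAME --query Status"
echo "$CMD"   # review, then run with: eval "$CMD"
```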

## Cannot delete the Code Editor or JupyterLab application
<a name="studio-updated-troubleshooting-cannot-delete-application"></a>

This issue occurs when a user creates an application that is only available in Studio, then reverts their default experience to Studio Classic. As a result, the user can't delete the Code Editor, based on Code-OSS, Visual Studio Code - Open Source, or JupyterLab application, because they can't access the Studio UI.

To resolve this issue, notify your administrator so that they can delete the application manually using the AWS Command Line Interface (AWS CLI). 

## EC2InsufficientCapacityError
<a name="studio-updated-troubleshooting-ec2-capacity"></a>

This issue occurs when you try to run a space and AWS does not currently have enough available on-demand capacity to fulfill your request. 

To resolve this issue, try the following: 
+ Wait a few minutes, then resubmit your request. Capacity can shift frequently.
+ Run the space with an alternate instance size or type.

**Note**  
Capacity is available in different Availability Zones. To maximize capacity availability for users, we recommend setting up subnets in all Availability Zones. Studio retries all available Availability Zones for the domain.   
Instance type availability differs between Regions. For a list of supported instance types per Region, see [Amazon SageMaker AI pricing](https://aws.amazon.com/sagemaker/pricing/).

The following table lists instance families and their recommended alternatives.


| Instance family | CPU Type | vCPUs | Memory (GiB) | GPU type | GPUs | GPU Memory (GiB) | Recommended alternative | 
| --- | --- | --- | --- | --- | --- | --- | --- | 
| G4dn | 2nd Generation Intel Xeon Scalable Processors | 4 to 96 | 16 to 384 | NVIDIA T4 Tensor Core | 1 to 8 | 16 per GPU | G6 | 
| G5 | 2nd generation AMD EPYC processors | 4 to 192 | 16 to 768 | NVIDIA A10G Tensor core | 1 to 8 | 24 per GPU | G6e | 
| G6 | 3rd generation AMD EPYC processors | 4 to 192 | 16 to 768 | NVIDIA L4 Tensor Core | 1 to 8 | 24 per GPU | G4dn | 
| G6e | 3rd generation AMD EPYC processors | 4 to 192 | 32 to 1536 | NVIDIA L40S Tensor Core | 1 to 8 | 48 per GPU | G5, P4 | 
| P3 | Intel Xeon Scalable Processors | 8 to 96 | 61 to 768 | NVIDIA Tesla V100 | 1 to 8 | 16 per GPU (32 per GPU for P3dn) | G6e, P4 | 
| P4 | 2nd Generation Intel Xeon Scalable processors | 96 | 1152 | NVIDIA A100 Tensor Core | 8 | 320 (640 for P4de) | G6e | 
| P5 | 3rd Gen AMD EPYC processors | 192 | 2000 | NVIDIA H100 Tensor Core | 8 | 640 | P4de | 

## Insufficient limit (quota increase required)
<a name="studio-updated-troubleshooting-insufficient-limit"></a>

This issue occurs when you get the following error message while attempting to run a space. 

```
Error when creating application for space: ... : The account-level service limit is X Apps, with current utilization Y Apps and a request delta of 1 Apps. Please use Service Quotas to request an increase for this quota.
```

There is a default limit on the number of instances, for each instance type, that you can run in each AWS Region. This error means that you have reached that limit. 

To resolve this issue, request an instance limit increase for the AWS Region that you are launching the space in. For more information, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html).
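The quota increase can also be requested from the AWS CLI. A sketch (the quota code and desired value are placeholders; look the code up first, because each instance type and app type has its own quota):

```
# Find the quota code for the instance/app type you need.
aws service-quotas list-service-quotas --service-code sagemaker \
    | jq -r '.Quotas[] | [.QuotaCode, .QuotaName] | @tsv' | grep -i "app"

# Request the increase (placeholder quota code and desired value).
aws service-quotas request-service-quota-increase \
    --service-code sagemaker \
    --quota-code L-xxxxxxxx \
    --desired-value 2
```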

## Failure to load custom image
<a name="studio-updated-troubleshooting-custom-image"></a>

This issue occurs when a SageMaker AI image is deleted before detaching the image from your domain. This can be seen when you view the **Environment** tab for your domain.

To resolve this issue, create a temporary image with the same name as the deleted image, detach the image, and then delete the temporary image. Use the following instructions as a walkthrough.

1. If you have not already done so, launch the [SageMaker AI console](https://console.aws.amazon.com/sagemaker).

1. In the left navigation menu, under **Admin configurations**, choose **Domains**.

1. Choose your domain.

1. Choose the **Environment** tab. You will see the error message on this page.

1. Copy your image name from the image ARN.

1. In the left navigation menu, under **Admin configurations**, choose **Images**.

1. Choose **Create image**.

1. Follow the steps in the procedure, but ensure that your image name is the same as the image name from above. 

   If you do not have an image in an Amazon ECR repository, see the instructions in [Create a custom image and push to Amazon ECR](studio-updated-byoi-how-to-prepare-image.md).

1. Once you have created your SageMaker AI image, navigate back to your domain **Environment** tab. You will see the image attached to your domain.

1. Select the image and choose **Detach**.

1. Follow the instructions to detach and delete the temporary SageMaker AI image.

# Migration from Amazon SageMaker Studio Classic
<a name="studio-updated-migrate"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

When you open Amazon SageMaker Studio, the web-based UI is based on the chosen default experience. Amazon SageMaker AI currently supports two different default experiences: the Amazon SageMaker Studio experience and the Amazon SageMaker Studio Classic experience. To access the latest Amazon SageMaker Studio features, you must migrate existing domains from the Amazon SageMaker Studio Classic experience. When you migrate your default experience from Studio Classic to Studio, you don't lose any features, and can still access the Studio Classic IDE within Studio. For information about the added benefits of the Studio experience, see [Amazon SageMaker Studio](studio-updated.md).

**Note**  
For existing customers that created their accounts before November 30, 2023, Studio Classic may be the default experience. You can enable Studio as your default experience using the AWS Command Line Interface (AWS CLI) or the Amazon SageMaker AI console. For more information about Studio Classic, see [Amazon SageMaker Studio Classic](studio.md). 
For customers that created their accounts after November 30, 2023, we recommend using Studio as the default experience because it contains various integrated development environments (IDEs), including the Studio Classic IDE, and other new features.  
JupyterLab 3 reached its end of maintenance on May 15, 2024. After December 31, 2024, you can only create new Studio Classic notebooks on JupyterLab 3 for a limited period. However, after December 31, 2024, SageMaker AI will no longer provide fixes for critical issues in Studio Classic notebooks on JupyterLab 3. We recommend that you migrate your workloads to the new Studio experience, which supports JupyterLab 4.
+ If Studio is your default experience, the UI is similar to the images found in [Amazon SageMaker Studio UI overview](studio-updated-ui.md).
+ If Studio Classic is your default experience, the UI is similar to the images found in [Amazon SageMaker Studio Classic UI Overview](studio-ui.md).

To migrate, you must update an existing domain. Migrating an existing domain from Studio Classic to Studio requires three distinct phases:

1. **Migrate the UI from Studio Classic to Studio**: One-time, low-lift task that requires creating a test domain to ensure Studio is compliant with your organization's network configurations before migrating the existing domain's UI from Studio Classic to Studio.

1. **(Optional) Migrate custom images and lifecycle configuration scripts**: Medium-lift task for migrating your custom images and LCC scripts from Studio Classic to Studio.

1. **(Optional) Migrate data from Studio Classic to Studio**: Heavy-lift task that requires using AWS DataSync to migrate data from the Studio Classic Amazon Elastic File System volume to either a target Amazon EFS or Amazon Elastic Block Store volume.

   1. **(Optional) Migrate data flows from Data Wrangler in Studio Classic**: One-time, low-lift task for migrating your data flows from Data Wrangler in Studio Classic to Studio, which you can then access in the latest version of Studio through SageMaker Canvas. For more information, see [Migrate data flows from Data Wrangler](studio-updated-migrate-data.md#studio-updated-migrate-flows).

The following topics show how to complete these phases to migrate an existing domain from Studio Classic to Studio.

## Automatic migration
<a name="studio-updated-migrate-auto"></a>

Between July 2024 and August 2024, we are automatically upgrading the default landing experience for users to the new Studio experience. This only changes the default landing UI to the updated Studio UI. The Studio Classic application is still accessible from the new Studio UI.

To ensure that migration works successfully for your users, see [Migrate the UI from Studio Classic to Studio](studio-updated-migrate-ui.md). In particular, ensure the following:
+ The domain's execution role has the right permissions.
+ The default landing experience is set to Studio.
+ The domain's Amazon VPC, if applicable, is configured to allow access to Studio through the Studio VPC endpoint.

However, if you need to continue having Studio Classic as your default UI for a limited time, set the landing experience to Studio Classic explicitly. For more information, see [Set Studio Classic as the default experience](studio-updated-migrate-ui.md#studio-updated-migrate-revert).
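As a sketch, keeping Studio Classic as the default landing experience for a domain is a single `update-domain` call (the domain ID is a placeholder; the values mirror the user-level settings described later in this section):

```
aws sagemaker update-domain \
    --domain-id domain-id \
    --default-user-settings '{
        "StudioWebPortal": "DISABLED",
        "DefaultLandingUri": "app:JupyterServer:"
    }'
```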

**Topics**
+ [Automatic migration](#studio-updated-migrate-auto)
+ [Complete prerequisites to migrate the Studio experience](studio-updated-migrate-prereq.md)
+ [Migrate the UI from Studio Classic to Studio](studio-updated-migrate-ui.md)
+ [(Optional) Migrate custom images and lifecycle configurations](studio-updated-migrate-lcc.md)
+ [(Optional) Migrate data from Studio Classic to Studio](studio-updated-migrate-data.md)

# Complete prerequisites to migrate the Studio experience
<a name="studio-updated-migrate-prereq"></a>

Migration of the default experience from Studio Classic to Studio is managed by the administrator of the existing domain. If you do not have permissions to set Studio as the default experience for the existing domain, contact your administrator. To migrate your default experience, you must have administrator permissions, or at least permissions to update the existing domain and to manage AWS Identity and Access Management (IAM) and Amazon Simple Storage Service (Amazon S3) resources. Complete the following prerequisites before migrating an existing domain from Studio Classic to Studio.
+ The AWS Identity and Access Management role used to complete migration must have a policy attached with at least the following permissions. For information about creating an IAM policy, see [Creating IAM policies](https://docs.aws.amazon.com//IAM/latest/UserGuide/access_policies_create.html).
**Note**  
The release of Studio includes updates to the AWS managed policies. For more information, see [SageMaker AI Updates to AWS Managed Policies](security-iam-awsmanpol.md#security-iam-awsmanpol-updates).
  + Phase 1 required permissions:
    + `iam:CreateServiceLinkedRole`
    + `iam:PassRole`
    + `sagemaker:DescribeDomain`
    + `sagemaker:UpdateDomain`
    + `sagemaker:CreateDomain`
    + `sagemaker:CreateUserProfile`
    + `sagemaker:ListApps`
    + `sagemaker:AddTags`
    + `sagemaker:DeleteApp`
    + `sagemaker:DeleteSpace`
    + `sagemaker:UpdateSpace`
    + `sagemaker:DeleteUserProfile`
    + `sagemaker:DeleteDomain`
    + `s3:PutBucketCORS`
  + Phase 2 required permissions (Optional, only if using lifecycle configuration scripts):

    No additional permissions needed. If the existing domain has lifecycle configurations and custom images, the admin will already have the required permissions.
  + Phase 3 using custom Amazon Elastic File System required permissions (Optional, only if transferring data):
    + `efs:CreateFileSystem`
    + `efs:CreateMountTarget`
    + `efs:DescribeFileSystems`
    + `efs:DescribeMountTargets`
    + `efs:DescribeMountTargetSecurityGroups`
    + `efs:ModifyMountTargetSecurityGroups`
    + `ec2:DescribeSubnets`
    + `ec2:DescribeSecurityGroups`
    + `ec2:DescribeNetworkInterfaceAttribute`
    + `ec2:DescribeNetworkInterfaces`
    + `ec2:AuthorizeSecurityGroupEgress`
    + `ec2:AuthorizeSecurityGroupIngress`
    + `ec2:CreateNetworkInterface`
    + `ec2:CreateNetworkInterfacePermission`
    + `ec2:RevokeSecurityGroupIngress`
    + `ec2:RevokeSecurityGroupEgress`
    + `ec2:DeleteSecurityGroup`
    + `datasync:CreateLocationEfs`
    + `datasync:CreateTask`
    + `datasync:StartTaskExecution`
    + `datasync:DeleteTask`
    + `datasync:DeleteLocation`
    + `sagemaker:ListUserProfiles`
    + `sagemaker:DescribeUserProfile`
    + `sagemaker:UpdateDomain`
    + `sagemaker:UpdateUserProfile`
  + Phase 3 using Amazon Simple Storage Service required permissions (Optional, only if transferring data):
    + `iam:CreateRole`
    + `iam:GetRole`
    + `iam:AttachRolePolicy`
    + `iam:DetachRolePolicy`
    + `iam:DeleteRole`
    + `efs:DescribeFileSystems`
    + `efs:DescribeMountTargets`
    + `efs:DescribeMountTargetSecurityGroups`
    + `ec2:DescribeSubnets`
    + `ec2:CreateSecurityGroup`
    + `ec2:DescribeSecurityGroups`
    + `ec2:DescribeNetworkInterfaces`
    + `ec2:CreateNetworkInterface`
    + `ec2:CreateNetworkInterfacePermission`
    + `ec2:DetachNetworkInterfaces`
    + `ec2:DeleteNetworkInterface`
    + `ec2:DeleteNetworkInterfacePermission`
    + `ec2:CreateTags`
    + `ec2:AuthorizeSecurityGroupEgress`
    + `ec2:AuthorizeSecurityGroupIngress`
    + `ec2:RevokeSecurityGroupIngress`
    + `ec2:RevokeSecurityGroupEgress`
    + `ec2:DeleteSecurityGroup`
    + `datasync:CreateLocationEfs`
    + `datasync:CreateLocationS3`
    + `datasync:CreateTask`
    + `datasync:StartTaskExecution`
    + `datasync:DescribeTaskExecution`
    + `datasync:DeleteTask`
    + `datasync:DeleteLocation`
    + `sagemaker:CreateStudioLifecycleConfig`
    + `sagemaker:UpdateDomain`
    + `s3:ListBucket`
    + `s3:GetObject`
+ Access to AWS services from a terminal environment on either:
  + Your local machine using the AWS CLI version `2.13+`. Use the following command to verify the AWS CLI version.

    ```
    aws --version
    ```
  + AWS CloudShell. For more information, see [What is AWS CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html)
+ From your local machine or AWS CloudShell, run the following command and provide your AWS credentials. For information about AWS credentials, see [Understanding and getting your AWS credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html).

  ```
  aws configure
  ```
+ Verify that the lightweight JSON processor, jq, is installed in the terminal environment. jq is required to parse AWS CLI responses.

  ```
  jq --version
  ```

  If jq is not installed, install it using one of the following commands:
  + On Debian-based distributions, such as Ubuntu:

    ```
    sudo apt-get install -y jq
    ```
  + On RPM-based distributions, such as Amazon Linux:

    ```
    sudo yum install -y jq
    ```
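Before moving on, you can confirm that jq parses AWS CLI-style JSON responses the way the later migration scripts expect:

```
# Parse a sample describe-domain-style response with jq.
sample='{"DomainId":"d-example","AuthMode":"IAM","SubnetIds":["subnet-a","subnet-b"]}'
echo "$sample" | jq -r '.SubnetIds | join(",")'
```

If this prints `subnet-a,subnet-b`, jq is working and the `describe-domain` parsing commands later in this guide will behave as expected.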

# Migrate the UI from Studio Classic to Studio
<a name="studio-updated-migrate-ui"></a>

The first phase for migrating an existing domain involves migrating the UI from Amazon SageMaker Studio Classic to Amazon SageMaker Studio. This phase does not include the migration of data. Users can continue working with their data the same way as they were before migration. For information about migrating data, see [(Optional) Migrate data from Studio Classic to Studio](studio-updated-migrate-data.md).

Phase 1 consists of the following steps:

1. Update application creation permissions for new applications available in Studio.

1. Update the VPC configuration for the domain.

1. Upgrade the domain to use the Studio UI.

## Prerequisites
<a name="studio-updated-migrate-ui-prereq"></a>

Before running these steps, complete the prerequisites in [Complete prerequisites to migrate the Studio experience](studio-updated-migrate-prereq.md).

## Step 1: Update application creation permissions
<a name="studio-updated-migrate-limit-apps"></a>

Before migrating the domain, update the domain's execution role to grant users permissions to create applications.

1. Create an AWS Identity and Access Management policy with one of the following contents by following the steps in [Creating IAM policies](https://docs.aws.amazon.com//IAM/latest/UserGuide/access_policies_create.html): 
   + Use the following policy to grant permissions for all application types and spaces.
**Note**  
If the domain uses the `SageMakerFullAccess` policy, you do not need to perform this action. `SageMakerFullAccess` grants permissions to create all applications.

------
#### [ JSON ]

****  

     ```
     {
         "Version":"2012-10-17",		 	 	 
         "Statement": [
             {
                 "Sid": "SMStudioUserProfileAppPermissionsCreateAndDelete",
                 "Effect": "Allow",
                 "Action": [
                     "sagemaker:CreateApp",
                     "sagemaker:DeleteApp"
                 ],
                 "Resource": "arn:aws:sagemaker:us-east-1:111122223333:app/*",
                 "Condition": {
                     "Null": {
                         "sagemaker:OwnerUserProfileArn": "true"
                     }
                 }
             },
             {
                 "Sid": "SMStudioCreatePresignedDomainUrlForUserProfile",
                 "Effect": "Allow",
                 "Action": [
                     "sagemaker:CreatePresignedDomainUrl"
                 ],
                 "Resource": "arn:aws:sagemaker:us-east-1:111122223333:user-profile/${sagemaker:DomainId}/${sagemaker:UserProfileName}"
             },
             {
                 "Sid": "SMStudioAppPermissionsListAndDescribe",
                 "Effect": "Allow",
                 "Action": [
                     "sagemaker:ListApps",
                     "sagemaker:ListDomains",
                     "sagemaker:ListUserProfiles",
                     "sagemaker:ListSpaces",
                     "sagemaker:DescribeApp",
                     "sagemaker:DescribeDomain",
                     "sagemaker:DescribeUserProfile",
                     "sagemaker:DescribeSpace"
                 ],
                 "Resource": "*"
             },
             {
                 "Sid": "SMStudioAppPermissionsTagOnCreate",
                 "Effect": "Allow",
                 "Action": [
                     "sagemaker:AddTags"
                 ],
                 "Resource": "arn:aws:sagemaker:us-east-1:111122223333:*/*",
                 "Condition": {
                     "Null": {
                         "sagemaker:TaggingAction": "false"
                     }
                 }
             },
             {
                 "Sid": "SMStudioRestrictSharedSpacesWithoutOwners",
                 "Effect": "Allow",
                 "Action": [
                     "sagemaker:CreateSpace",
                     "sagemaker:UpdateSpace",
                     "sagemaker:DeleteSpace"
                 ],
                 "Resource": "arn:aws:sagemaker:us-east-1:111122223333:space/${sagemaker:DomainId}/*",
                 "Condition": {
                     "Null": {
                         "sagemaker:OwnerUserProfileArn": "true"
                     }
                 }
             },
             {
                 "Sid": "SMStudioRestrictSpacesToOwnerUserProfile",
                 "Effect": "Allow",
                 "Action": [
                     "sagemaker:CreateSpace",
                     "sagemaker:UpdateSpace",
                     "sagemaker:DeleteSpace"
                 ],
                 "Resource": "arn:aws:sagemaker:us-east-1:111122223333:space/${sagemaker:DomainId}/*",
                 "Condition": {
                     "ArnLike": {
                         "sagemaker:OwnerUserProfileArn": "arn:aws:sagemaker:us-east-1:111122223333:user-profile/${sagemaker:DomainId}/${sagemaker:UserProfileName}"
                     },
                     "StringEquals": {
                         "sagemaker:SpaceSharingType": [
                             "Private",
                             "Shared"
                         ]
                     }
                 }
             },
             {
                 "Sid": "SMStudioRestrictCreatePrivateSpaceAppsToOwnerUserProfile",
                 "Effect": "Allow",
                 "Action": [
                     "sagemaker:CreateApp",
                     "sagemaker:DeleteApp"
                 ],
                 "Resource": "arn:aws:sagemaker:us-east-1:111122223333:app/${sagemaker:DomainId}/*",
                 "Condition": {
                     "ArnLike": {
                         "sagemaker:OwnerUserProfileArn": "arn:aws:sagemaker:us-east-1:111122223333:user-profile/${sagemaker:DomainId}/${sagemaker:UserProfileName}"
                     },
                     "StringEquals": {
                         "sagemaker:SpaceSharingType": [
                             "Private"
                         ]
                     }
                 }
             },
             {
                 "Sid": "AllowAppActionsForSharedSpaces",
                 "Effect": "Allow",
                 "Action": [
                     "sagemaker:CreateApp",
                     "sagemaker:DeleteApp"
                 ],
                 "Resource": "arn:aws:sagemaker:*:*:app/${sagemaker:DomainId}/*/*/*",
                 "Condition": {
                     "StringEquals": {
                         "sagemaker:SpaceSharingType": [
                             "Shared"
                         ]
                     }
                 }
             }
         ]
     }
     ```

------
   + Because Studio shows an expanded set of applications, users may have access to applications that weren't displayed before. Administrators can limit access to these default applications by creating an AWS Identity and Access Management (IAM) policy that denies permissions for specific applications to specific users.
**Note**  
Application type can be either `jupyterlab` or `codeeditor`.

------
#### [ JSON ]

****  

     ```
     {
         "Version":"2012-10-17",		 	 	 
         "Statement": [
             {
                 "Sid": "DenySageMakerCreateAppForSpecificAppTypes",
                 "Effect": "Deny",
                 "Action": "sagemaker:CreateApp",
                 "Resource": "arn:aws:sagemaker:us-east-1:111122223333:app/domain-id/*/app-type/*"
             }
         ]
     }
     ```

------

1. Attach the policy to the execution role of the domain. For instructions, follow the steps in [Adding IAM identity permissions (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console).
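If you manage IAM from the CLI instead of the console, creating and attaching the policy looks like the following sketch (the policy name, local file name, and role name are placeholders; the role name is the final segment of the execution role ARN):

```
# Create the customer managed policy from the JSON document above,
# saved locally as app-permissions.json (placeholder file name).
policy_arn=$(aws iam create-policy \
    --policy-name SMStudioAppPermissions \
    --policy-document file://app-permissions.json \
    | jq -r '.Policy.Arn')

# Attach the policy to the domain's execution role (placeholder name).
aws iam attach-role-policy \
    --role-name execution-role-name \
    --policy-arn "$policy_arn"
```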

## Step 2: Update VPC configuration
<a name="studio-updated-migrate-vpc"></a>

If you use your domain in `VPC-Only` mode, ensure your VPC configuration meets the requirements for using Studio in `VPC-Only` mode. For more information, see [Connect Amazon SageMaker Studio in a VPC to External Resources](studio-updated-and-internet-access.md).

## Step 3: Upgrade to the Studio UI
<a name="studio-updated-migrate-set-studio-updated"></a>

Before you migrate your existing domain from Studio Classic to Studio, we recommend creating a test domain using Studio with the same configurations as your existing domain.

### (Optional) Create a test domain
<a name="studio-updated-migrate-ui-create-test"></a>

Use this test domain to interact with Studio, test networking configurations, and launch applications before migrating the existing domain.

1. Get the domain ID of your existing domain.

   1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

   1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

   1. Choose the existing domain.

   1. On the **Domain details** page, choose the **Domain settings** tab.

   1. Copy the **Domain ID**.

1. Add the domain ID of your existing domain.

   ```
   export REF_DOMAIN_ID="domain-id"
   export SM_REGION="region"
   ```

1. Use `describe-domain` to get important information about the existing domain.

   ```
   export REF_EXECROLE=$(aws sagemaker describe-domain --region=$SM_REGION --domain-id=$REF_DOMAIN_ID | jq -r '.DefaultUserSettings.ExecutionRole')
   export REF_VPC=$(aws sagemaker describe-domain --region=$SM_REGION --domain-id=$REF_DOMAIN_ID | jq -r '.VpcId')
   export REF_SIDS=$(aws sagemaker describe-domain --region=$SM_REGION --domain-id=$REF_DOMAIN_ID | jq -r '.SubnetIds | join(",")')
   export REF_SGS=$(aws sagemaker describe-domain --region=$SM_REGION --domain-id=$REF_DOMAIN_ID | jq -r '.DefaultUserSettings.SecurityGroups | join(",")')
   export AUTHMODE=$(aws sagemaker describe-domain --region=$SM_REGION --domain-id=$REF_DOMAIN_ID | jq -r '.AuthMode')
   ```

1. Validate the parameters.

   ```
   echo "Execution Role: $REF_EXECROLE || VPCID: $REF_VPC || SubnetIDs: $REF_SIDS || Security GroupIDs: $REF_SGS || AuthMode: $AUTHMODE"
   ```

1. Create a test domain using the configurations from the existing domain.

   ```
   IFS=',' read -r -a subnet_ids <<< "$REF_SIDS"
   IFS=',' read -r -a security_groups <<< "$REF_SGS"
   security_groups_json=$(printf '%s\n' "${security_groups[@]}" | jq -R . | jq -s .)
   
   aws sagemaker create-domain \
   --domain-name "TestV2Config" \
   --vpc-id $REF_VPC \
   --auth-mode $AUTHMODE \
   --subnet-ids "${subnet_ids[@]}" \
   --app-network-access-type VpcOnly \
   --default-user-settings "
   {
       \"ExecutionRole\": \"$REF_EXECROLE\",
       \"StudioWebPortal\": \"ENABLED\",
       \"DefaultLandingUri\": \"studio::\",
       \"SecurityGroups\": $security_groups_json
   }
   "
   ```

1. After the test domain's status is `InService`, use the test domain's ID to create a user profile. This user profile is used to launch and test applications.

   ```
   aws sagemaker create-user-profile \
   --region="$SM_REGION" --domain-id=test-domain-id \
   --user-profile-name test-network-user
   ```
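Domain creation takes several minutes. Before creating the user profile, you can poll until the test domain reports `InService` (a sketch; `test-domain-id` is a placeholder):

```
# Poll the test domain's status until it is InService.
while true; do
    status=$(aws sagemaker describe-domain --region=$SM_REGION \
        --domain-id=test-domain-id | jq -r '.Status')
    echo "Domain status: $status"
    [ "$status" = "InService" ] && break
    sleep 15
done
```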

#### Test Studio functionality
<a name="studio-updated-migrate-ui-testing"></a>

Launch the test domain using the `test-network-user` user profile. We suggest that you thoroughly test the Studio UI and create applications to verify Studio functionality in `VPC-Only` mode. Test the following workflows:
+ Create a new JupyterLab space, then test the environment and connectivity.
+ Create a new Code Editor, based on Code-OSS, Visual Studio Code - Open Source space, then test the environment and connectivity.
+ Launch a new Studio Classic application, then test the environment and connectivity.
+ Test Amazon Simple Storage Service connectivity with test read and write actions.
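The Amazon S3 connectivity check can be scripted from a terminal inside the space. A sketch, assuming a bucket that your execution role can write to (`amzn-s3-demo-bucket` is a placeholder):

```
# Write a small object, read it back, then clean up.
echo "connectivity-test" > /tmp/s3-test.txt
aws s3 cp /tmp/s3-test.txt s3://amzn-s3-demo-bucket/s3-test.txt
aws s3 cp s3://amzn-s3-demo-bucket/s3-test.txt /tmp/s3-test-back.txt
cat /tmp/s3-test-back.txt
aws s3 rm s3://amzn-s3-demo-bucket/s3-test.txt
```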

If these tests are successful, then upgrade the existing domain. If you encounter any failures, we recommend fixing your environment and connectivity issues before updating the existing domain.

#### Clean up test domain resources
<a name="studio-updated-migrate-ui-clean"></a>

After you have migrated the existing domain, clean up test domain resources.

1. Add the test domain's ID.

   ```
   export TEST_DOMAIN="test-domain-id"
   export SM_REGION="region"
   ```

1. List all applications in the domain that are in a running state.

   ```
   active_apps_json=$(aws sagemaker list-apps --region=$SM_REGION --domain-id=$TEST_DOMAIN)
   echo $active_apps_json
   ```

1. Parse the JSON list of running applications and delete them. If users attempted to create an application that they do not have permissions for, there may be spaces that are not captured in the following script. You must manually delete these spaces.

   ```
   echo "$active_apps_json" | jq -c '.Apps[]' | while read -r app;
   do
       if echo "$app" | jq -e '. | has("SpaceName")' > /dev/null;
       then
           app_type=$(echo "$app" | jq -r '.AppType')
           app_name=$(echo "$app" | jq -r '.AppName')
           domain_id=$(echo "$app" | jq -r '.DomainId')
           space_name=$(echo "$app" | jq -r '.SpaceName')
   
           echo "Deleting App - AppType: $app_type || AppName: $app_name || DomainId: $domain_id || SpaceName: $space_name"
           aws sagemaker delete-app --region=$SM_REGION --domain-id=$domain_id \
           --app-type $app_type --app-name $app_name --space-name $space_name
   
           echo "Deleting Space - AppType: $app_type || AppName: $app_name || DomainId: $domain_id || SpaceName: $space_name"
           aws sagemaker delete-space --region=$SM_REGION --domain-id=$domain_id \
           --space-name $space_name
       else
   
           app_type=$(echo "$app" | jq -r '.AppType')
           app_name=$(echo "$app" | jq -r '.AppName')
           domain_id=$(echo "$app" | jq -r '.DomainId')
           user_profile_name=$(echo "$app" | jq -r '.UserProfileName')
   
           echo "Deleting Studio Classic - AppType: $app_type || AppName: $app_name || DomainId: $domain_id || UserProfileName: $user_profile_name"
           aws sagemaker delete-app --region=$SM_REGION --domain-id=$domain_id \
           --app-type $app_type --app-name $app_name --user-profile-name $user_profile_name
   
       fi
   
   done
   ```

1. Delete the test user profile.

   ```
   aws sagemaker delete-user-profile \
   --region=$SM_REGION --domain-id=$TEST_DOMAIN \
   --user-profile-name "test-network-user"
   ```

1. Delete the test domain.

   ```
   aws sagemaker delete-domain \
   --region=$SM_REGION --domain-id=$TEST_DOMAIN
   ```

After you have tested Studio functionality with the configurations in your test domain, migrate the existing domain. When Studio is the default experience for a domain, Studio is the default experience for all users in the domain. However, user settings take precedence over the domain settings. Therefore, if a user has their default experience set to Studio Classic in their user settings, that user will have Studio Classic as their default experience. 

You can migrate the existing domain by updating it from the SageMaker AI console, the AWS CLI, or AWS CloudFormation. Choose one of the following tabs to view the relevant instructions.

### Set Studio as the default experience for the existing domain using the SageMaker AI console
<a name="studio-updated-migrate-set-studio-updated-console"></a>

You can set Studio as the default experience for the existing domain by using the SageMaker AI console.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

1. Choose the existing domain that you want to enable Studio as the default experience for.

1. On the **Domain details** page, expand **Enable the new Studio**.

1. (Optional) To view the details about the steps involved in enabling Studio as your default experience, choose **View details**. The page shows the following.
   + In the **SageMaker Studio Overview** section you can view the applications that are included or available in the Studio web-based interface. 
   + In the **Enablement process** section you can view descriptions of the workflow tasks to enable Studio.
**Note**  
You will need to migrate your data manually. For instructions about migrating your data, see [(Optional) Migrate data from Studio Classic to Studio](studio-updated-migrate-data.md).
   + In the **Revert to Studio Classic experience** section you can view how to revert back to Studio Classic after enabling Studio as your default experience.

1. To begin the process to enable Studio as your default experience, choose **Enable the new Studio**.

1. In the **Specify and configure role** section, you can view the default applications that are automatically included in Studio.

   To prevent users from running these applications, choose an AWS Identity and Access Management (IAM) role that has an IAM policy that denies access. For information about how to create a policy to limit access, see [Step 1: Update application creation permissions](#studio-updated-migrate-limit-apps).

1. In the **Choose default S3 bucket to attach CORS policy** section, you can give Studio access to Amazon S3 buckets. The default Amazon S3 bucket, in this case, is the default Amazon S3 bucket for your Studio Classic domain. In this step, you can do the following:
   + Verify the domain’s default Amazon S3 bucket to attach the CORS policy to. If your domain does not have a default Amazon S3 bucket, SageMaker AI creates an Amazon S3 bucket with the correct CORS policy attached.
   + You can include up to 10 additional Amazon S3 buckets to attach the CORS policy to.

     If you wish to include more than 10 buckets, you can add them manually. For more information about manually attaching the CORS policy to your Amazon S3 buckets, see [(Optional) Update your CORS policy to access Amazon S3 buckets](#studio-updated-migrate-cors).

   To proceed, select the check box next to **Do you agree to overriding any existing CORS policy on the chosen Amazon S3 buckets?**.

1. The **Migrate data** section contains information about the different data storage volumes for Studio Classic and Studio. Your data will not be migrated automatically through this process. For instructions about migrating your data, lifecycle configurations, and JupyterLab extensions, see [(Optional) Migrate data from Studio Classic to Studio](studio-updated-migrate-data.md).

1. Once you have completed the tasks on the page and verified your configuration, choose **Enable the new Studio**.

### Set Studio as the default experience for the existing domain using the AWS CLI
<a name="studio-updated-migrate-set-studio-updated-cli"></a>

To set Studio as the default experience for the existing domain using the AWS CLI, use the [update-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-domain.html) call. You must set `ENABLED` as the value for `StudioWebPortal`, and set `studio::` as the value for `DefaultLandingUri` as part of the `default-user-settings` parameter. 

`StudioWebPortal` indicates if the Studio experience is the default experience and `DefaultLandingUri` indicates the default experience that the user is directed to when accessing the domain. In this example, setting these values on a domain level (in `default-user-settings`) makes Studio the default experience for users within the domain.

If a user within the domain has their `StudioWebPortal` set to `DISABLED` and `DefaultLandingUri` set to `app:JupyterServer:` on a user level (in `UserSettings`), this takes precedence over the domain settings. In other words, that user will have Studio Classic as their default experience, regardless of the domain settings. 
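Such a per-user override can be sketched with the `update-user-profile` AWS CLI call. In the following sketch, the domain ID and user profile name are placeholders; substitute your own values:

```
# Sketch: give one user Studio Classic as their default experience,
# overriding the domain-level settings. The domain ID and user profile
# name below are placeholders.
USER_SETTINGS='{"StudioWebPortal": "DISABLED", "DefaultLandingUri": "app:JupyterServer:"}'

# Run only when the AWS CLI is installed and credentials are configured.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws sagemaker update-user-profile \
    --domain-id d-xxxxxxxxxxxx \
    --user-profile-name example-user \
    --user-settings "$USER_SETTINGS"
fi
```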

The following code example shows how to set Studio as the default experience for users within the domain:

```
aws sagemaker update-domain \
--domain-id existing-domain-id \
--region aws-region \
--default-user-settings '
{
    "StudioWebPortal": "ENABLED",
    "DefaultLandingUri": "studio::"
}
'
```
+ To obtain your `existing-domain-id`, use the following instructions:

**To get `existing-domain-id`**

  1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

  1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

  1. Choose the existing domain.

  1. On the **Domain details** page, choose the **Domain settings** tab.

  1. Copy the **Domain ID**.
+ To ensure you are using the correct AWS Region for your domain, use the following instructions: 

**To get `AWS Region`**

  1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

  1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

  1. Choose the existing domain.

  1. On the **Domain details** page, verify that this is the existing domain.

  1. Expand the AWS Region dropdown list at the top right of the SageMaker AI console, and note the Region code shown to the right of your AWS Region name. For example, `us-west-1`.

After you migrate your default experience to Studio, you can give Studio access to Amazon S3 buckets. For example, you can include access to your Studio Classic default Amazon S3 bucket and additional Amazon S3 buckets. To do so, you must manually attach a [Cross-Origin Resource Sharing](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) (CORS) configuration to the Amazon S3 buckets. For more information about how to manually attach the CORS policy to your Amazon S3 buckets, see [(Optional) Update your CORS policy to access Amazon S3 buckets](#studio-updated-migrate-cors).

Similarly, you can set Studio as the default experience when you create a domain from the AWS CLI using the [create-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-domain.html) call. 

### Set Studio as the default experience for the existing domain using AWS CloudFormation
<a name="studio-updated-migrate-set-studio-updated-cloud-formation"></a>

You can set the default experience when creating a domain using AWS CloudFormation. For a CloudFormation migration template, see [SageMaker Studio Administrator IaC Templates](https://github.com/aws-samples/sagemaker-studio-admin-iac-templates/tree/main?tab=readme-ov-file#phase-1-migration). For more information about creating a domain using CloudFormation, see [Creating Amazon SageMaker AI domain using CloudFormation](https://github.com/aws-samples/cloudformation-studio-domain?tab=readme-ov-file#creating-sagemaker-studio-domains-using-cloudformation).

For information about the domain resource supported by AWS CloudFormation, see [AWS::SageMaker::Domain](https://docs.aws.amazon.com//AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-domain.html#cfn-sagemaker-domain-defaultusersettings).
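In a CloudFormation template, the same two settings from the AWS CLI section appear under the domain's `DefaultUserSettings`. The following fragment is a sketch rather than a complete template; the domain name, network identifiers, and execution role ARN are placeholders:

```
Resources:
  StudioDomain:
    Type: AWS::SageMaker::Domain
    Properties:
      DomainName: example-domain
      AuthMode: IAM
      VpcId: vpc-xxxxxxxx
      SubnetIds:
        - subnet-xxxxxxxx
      DefaultUserSettings:
        ExecutionRole: arn:aws:iam::111122223333:role/example-execution-role
        StudioWebPortal: ENABLED
        DefaultLandingUri: "studio::"
```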

After you migrate your default experience to Studio, you can give Studio access to Amazon S3 buckets. For example, you can include access to your Studio Classic default Amazon S3 bucket and additional Amazon S3 buckets. To do so, you must manually attach a [Cross-Origin Resource Sharing](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) (CORS) configuration to the Amazon S3 buckets. For information about how to manually attach the CORS policy to your Amazon S3 buckets, see [(Optional) Update your CORS policy to access Amazon S3 buckets](#studio-updated-migrate-cors).

### (Optional) Update your CORS policy to access Amazon S3 buckets
<a name="studio-updated-migrate-cors"></a>

In Studio Classic, users can create, list, and upload files to Amazon Simple Storage Service (Amazon S3) buckets. To support the same experience in Studio, administrators must attach a [Cross-Origin Resource Sharing](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) (CORS) configuration to the Amazon S3 buckets. This is required because Studio makes Amazon S3 calls from the internet browser. The browser invokes CORS on behalf of users. As a result, all of the requests to Amazon S3 buckets fail unless the CORS policy is attached to the Amazon S3 buckets.

You may need to manually attach the CORS policy to Amazon S3 buckets for the following reasons.
+ If the existing default Amazon S3 bucket doesn’t have the correct CORS policy attached when you migrate the existing domain's default experience to Studio.
+ If you are using the AWS CLI to migrate the existing domain's default experience to Studio. For information about using the AWS CLI to migrate, see [Set Studio as the default experience for the existing domain using the AWS CLI](#studio-updated-migrate-set-studio-updated-cli).
+ If you want to attach the CORS policy to additional Amazon S3 buckets.

**Note**  
If you plan to use the SageMaker AI console to enable Studio as your default experience, the Amazon S3 buckets that you attach the CORS policy to will have their existing CORS policies overridden during the migration. For this reason, you can ignore the following manual instructions.  
However, if you have already used the SageMaker AI console to migrate and want to include more Amazon S3 buckets to attach the CORS policy to, then continue with the following manual instructions.

The following procedure shows how to manually add a CORS configuration to an Amazon S3 bucket.

**To add a CORS configuration to an Amazon S3 bucket**

1. Verify that there is an Amazon S3 bucket in the same AWS Region as the existing domain with the following name. For instructions, see [Viewing the properties for an Amazon S3 bucket](https://docs.aws.amazon.com//AmazonS3/latest/userguide/view-bucket-properties.html). 

   ```
   sagemaker-region-account-id
   ```

1. Add a CORS configuration with the following content to the default Amazon S3 bucket. For instructions, see [Configuring cross-origin resource sharing (CORS)](https://docs.aws.amazon.com//AmazonS3/latest/userguide/enabling-cors-examples.html).

   ```
   [
       {
           "AllowedHeaders": [
               "*"
           ],
           "AllowedMethods": [
               "POST",
               "PUT",
               "GET",
               "HEAD",
               "DELETE"
           ],
           "AllowedOrigins": [
               "https://*.sagemaker.aws"
           ],
           "ExposeHeaders": [
               "ETag",
               "x-amz-delete-marker",
               "x-amz-id-2",
               "x-amz-request-id",
               "x-amz-server-side-encryption",
               "x-amz-version-id"
           ]
       }
   ]
   ```
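The procedure above can also be scripted with the AWS CLI. The following sketch derives the default bucket name and attaches the CORS configuration shown above using `put-bucket-cors`; the Region and account ID are placeholders, so substitute your own values:

```
# Sketch: attach the CORS configuration above to the default bucket.
# Region and account ID are placeholders; substitute your own values.
REGION="us-west-1"
ACCOUNT_ID="111122223333"
BUCKET="sagemaker-${REGION}-${ACCOUNT_ID}"

CORS_CONFIG='{"CORSRules": [{"AllowedHeaders": ["*"], "AllowedMethods": ["POST", "PUT", "GET", "HEAD", "DELETE"], "AllowedOrigins": ["https://*.sagemaker.aws"], "ExposeHeaders": ["ETag", "x-amz-delete-marker", "x-amz-id-2", "x-amz-request-id", "x-amz-server-side-encryption", "x-amz-version-id"]}]}'

# Run only when the AWS CLI is installed and credentials are configured.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3api put-bucket-cors \
    --bucket "$BUCKET" \
    --cors-configuration "$CORS_CONFIG"
fi
```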

### (Optional) Migrate from Data Wrangler in Studio Classic to SageMaker Canvas
<a name="studio-updated-migrate-dw"></a>

Amazon SageMaker Data Wrangler exists as its own feature in the Studio Classic experience. When you enable Studio as your default experience, use the [Amazon SageMaker Canvas](https://docs.aws.amazon.com/sagemaker/latest/dg/canvas.html) application to access Data Wrangler functionality. SageMaker Canvas is an application in which you can train and deploy machine learning models without writing any code, and Canvas provides data preparation features powered by Data Wrangler.

The new Studio experience doesn’t support the classic Data Wrangler UI, and you must create a Canvas application if you want to continue using Data Wrangler. However, you must have the necessary permissions to create and use Canvas applications.

Complete the following steps to attach the necessary permissions policies to your SageMaker AI domain's or user’s AWS IAM role.

**To grant permissions for Data Wrangler functionality inside Canvas**

1. Attach the AWS managed policy [AmazonSageMakerFullAccess](https://docs.aws.amazon.com/sagemaker/latest/dg/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonSageMakerFullAccess) to your user’s IAM role. For a procedure that shows you how to attach IAM policies to a role, see [Adding IAM identity permissions (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console) in the *AWS IAM User Guide.*

   If this permissions policy is too permissive for your use case, you can create scoped-down policies that include at least the following permissions:

   ```
   {
       "Sid": "AllowStudioActions",
       "Effect": "Allow",
       "Action": [
           "sagemaker:CreatePresignedDomainUrl",
           "sagemaker:DescribeDomain",
           "sagemaker:ListDomains",
           "sagemaker:DescribeUserProfile",
           "sagemaker:ListUserProfiles",
           "sagemaker:DescribeSpace",
           "sagemaker:ListSpaces",
           "sagemaker:DescribeApp",
           "sagemaker:ListApps"
       ],
       "Resource": "*"
   },
   {
       "Sid": "AllowAppActionsForUserProfile",
       "Effect": "Allow",
       "Action": [
           "sagemaker:CreateApp",
           "sagemaker:DeleteApp"
       ],
       "Resource": "arn:aws:sagemaker:region:account-id:app/domain-id/user-profile-name/canvas/*",
       "Condition": {
           "Null": {
               "sagemaker:OwnerUserProfileArn": "true"
           }
       }
   }
   ```

1. Attach the AWS managed policy [AmazonSageMakerCanvasDataPrepFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSageMakerCanvasDataPrepFullAccess.html) to your user’s IAM role.

After attaching the necessary permissions, you can create a Canvas application and log in. For more information, see [Getting started with using Amazon SageMaker Canvas](canvas-getting-started.md).

When you’ve logged into Canvas, you can directly access Data Wrangler and begin creating data flows. For more information, see [Data preparation](canvas-data-prep.md) in the Canvas documentation.

### (Optional) Migrate from Autopilot in Studio Classic to SageMaker Canvas
<a name="studio-updated-migrate-autopilot"></a>

[Amazon SageMaker Autopilot](https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development.html) exists as its own feature in the Studio Classic experience. When you migrate to using the updated Studio experience, use the [Amazon SageMaker Canvas](https://docs.aws.amazon.com/sagemaker/latest/dg/canvas.html) application to continue using the same automated machine learning (AutoML) capabilities via a user interface (UI). SageMaker Canvas is an application in which you can train and deploy machine learning models without writing any code, and Canvas provides a UI to run your AutoML tasks.

The new Studio experience doesn’t support the classic Autopilot UI. You must create a Canvas application if you want to continue using Autopilot's AutoML features via a UI. 

However, you must have the necessary permissions to create and use Canvas applications.
+ If you are accessing SageMaker Canvas from Studio, add those permissions to the execution role of your SageMaker AI domain or user profile.
+ If you are accessing SageMaker Canvas from the Console, add those permissions to your user’s AWS IAM role.
+ If you are accessing SageMaker Canvas via a [presigned URL](https://docs.aws.amazon.com/sagemaker/latest/dg/setting-up-canvas-sso.html#canvas-optional-access), add those permissions to the IAM role that you're using for Okta SSO access.

To enable AutoML capabilities in Canvas, add the following policies to your execution role or IAM user role.
+ AWS managed policy: [`AmazonSageMakerCanvasFullAccess`](https://docs.aws.amazon.com/sagemaker/latest/dg/security-iam-awsmanpol-canvas.html#security-iam-awsmanpol-AmazonSageMakerCanvasFullAccess)
+ Inline policy:

  ```
  {
      "Sid": "AllowAppActionsForUserProfile",
      "Effect": "Allow",
      "Action": [
          "sagemaker:CreateApp",
          "sagemaker:DeleteApp"
      ],
      "Resource": "arn:aws:sagemaker:region:account-id:app/domain-id/user-profile-name/canvas/*",
      "Condition": {
          "Null": {
              "sagemaker:OwnerUserProfileArn": "true"
          }
      }
  }
  ```
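The inline policy above can also be attached with the AWS CLI instead of the console procedure that follows. This sketch wraps the statement in a policy document and uses `put-role-policy`; the role name, policy name, Region, account ID, domain ID, and user profile name are all placeholders:

```
# Sketch: attach the inline policy above to an execution role from the CLI.
# All identifiers below are placeholders; substitute your own values.
POLICY_DOC='{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAppActionsForUserProfile",
        "Effect": "Allow",
        "Action": ["sagemaker:CreateApp", "sagemaker:DeleteApp"],
        "Resource": "arn:aws:sagemaker:us-west-1:111122223333:app/d-xxxxxxxxxxxx/example-user/canvas/*",
        "Condition": {"Null": {"sagemaker:OwnerUserProfileArn": "true"}}
    }]
}'

# Run only when the AWS CLI is installed and credentials are configured.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws iam put-role-policy \
    --role-name example-execution-role \
    --policy-name CanvasAppActions \
    --policy-document "$POLICY_DOC"
fi
```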

**To attach IAM policies to an execution role**

1. **Find the execution role attached to your SageMaker AI user profile**

   1. In the SageMaker AI console [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/), navigate to **Domains**, then choose your SageMaker AI domain.

   1. The execution role ARN is listed under *Execution role* on the **User Details** page of your user profile. Make note of the execution role name in the ARN.

   1. In the IAM console [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/), choose **Roles**.

   1. Search for your role by name in the search field.

   1. Select the role.

1. **Add policies to the role**

   1. In the IAM console [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/), choose **Roles**.

   1. Search for your role by name in the search field.

   1. Select the role.

   1. In the **Permissions** tab, navigate to the dropdown menu **Add permissions**.

   1. Do one of the following:
      + For managed policies: Select **Attach policies**, search for the name of the managed policy you want to attach, select the policy, then choose **Add permissions**.
      + For inline policies: Select **Create inline policy**, paste your policy in the **JSON** tab, choose **Next**, name your policy, and choose **Create**.

For a procedure that shows you how to attach IAM policies to a role, see [Adding IAM identity permissions (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console) in the *AWS IAM User Guide.*

After attaching the necessary permissions, you can create a Canvas application and log in. For more information, see [Getting started with using Amazon SageMaker Canvas](canvas-getting-started.md).

## Set Studio Classic as the default experience
<a name="studio-updated-migrate-revert"></a>

Administrators can revert to Studio Classic as the default experience for an existing domain by using the AWS CLI.

**Note**  
When Studio Classic is set as the default experience on a domain level, Studio Classic is the default experience for all users in the domain. However, settings on a user level take precedence over the domain-level settings. So if a user has their default experience set to Studio, that user will have Studio as their default experience. 

To revert to Studio Classic as the default experience for the existing domain using the AWS CLI, use the [update-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-domain.html) call. As part of the `default-user-settings` field, you must set:
+ `StudioWebPortal` value to `DISABLED`.
+ `DefaultLandingUri` value to `app:JupyterServer:`.

`StudioWebPortal` indicates if the Studio experience is the default experience and `DefaultLandingUri` indicates the default experience that the user is directed to when accessing the domain. In this example, setting these values on a domain level (in `default-user-settings`) makes Studio Classic the default experience for users within the domain.

If a user within the domain has their `StudioWebPortal` set to `ENABLED` and `DefaultLandingUri` set to `studio::` on a user level (in `UserSettings`), this takes precedence over the domain level settings. In other words, that user will have Studio as their default experience, regardless of the domain level settings. 

The following code example shows how to set Studio Classic as the default experience for users within the domain:

```
aws sagemaker update-domain \
--domain-id existing-domain-id \
--region aws-region \
--default-user-settings '
{
    "StudioWebPortal": "DISABLED",
    "DefaultLandingUri": "app:JupyterServer:"
}
'
```

Use the following instructions to obtain your `existing-domain-id`.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

1. Choose the existing domain.

1. On the **Domain details** page, choose the **Domain settings** tab.

1. Copy the **Domain ID**.

To obtain your `AWS Region`, use the following instructions to ensure you are using the correct AWS Region for your domain.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the left navigation pane, expand **Admin configurations** and choose **Domains**. 

1. Choose the existing domain.

1. On the **Domain details** page, verify that this is the existing domain.

1. Expand the AWS Region dropdown list at the top right of the SageMaker AI console, and note the Region code shown to the right of your AWS Region name. For example, `us-west-1`.

# (Optional) Migrate custom images and lifecycle configurations
<a name="studio-updated-migrate-lcc"></a>

You must update your custom images and lifecycle configuration (LCC) scripts to work with the simplified local runtime model in Amazon SageMaker Studio. If you have not created custom images or lifecycle configurations in your domain, skip this phase.

Amazon SageMaker Studio Classic operates in a split environment with:
+ A `JupyterServer` application running the Jupyter Server. 
+ Studio Classic notebooks running on one or more `KernelGateway` applications. 

Studio has moved away from this split environment. Studio runs the JupyterLab and Code Editor, based on Code-OSS, Visual Studio Code - Open Source, applications in a local runtime model. For more information about the change in architecture, see [Boost productivity on Amazon SageMaker Studio](https://aws.amazon.com/blogs//machine-learning/boost-productivity-on-amazon-sagemaker-studio-introducing-jupyterlab-spaces-and-generative-ai-tools/). 

## Migrate custom images
<a name="studio-updated-migrate-lcc-custom"></a>

Your existing Studio Classic custom images may not work in Studio. We recommend creating a new custom image that satisfies the requirements for use in Studio. The release of Studio simplifies the process of building custom images by providing [SageMaker Distribution](sagemaker-distribution.md) images as a starting point. SageMaker Distribution images include popular libraries and packages for machine learning, data science, and data analytics visualization. For a list of base SageMaker Distribution images and Amazon Elastic Container Registry account information, see [Amazon SageMaker Images Available for Use With Studio Classic Notebooks](notebooks-available-images.md).

To build a custom image, complete one of the following.
+ Extend a SageMaker Distribution image with custom packages and modules. These images are pre-configured with JupyterLab and Code Editor, based on Code-OSS, Visual Studio Code - Open Source.
+ Build a custom Dockerfile by following the instructions in [Bring your own image (BYOI)](studio-updated-byoi.md). You must install JupyterLab and the open source code-server on the image to make it compatible with Studio.

## Migrate lifecycle configurations
<a name="studio-updated-migrate-lcc-lcc"></a>

Because of the simplified local runtime model in Studio, we recommend migrating the structure of your existing Studio Classic LCCs. In Studio Classic, you often have to create separate lifecycle configurations for both KernelGateway and JupyterServer applications. Because the JupyterServer and KernelGateway applications run on separate compute resources within Studio Classic, Studio Classic LCCs can be one of either type: 
+ JupyterServer LCC: These LCCs mostly govern a user’s home actions, including setting proxy, creating environment variables, and auto-shutdown of resources.
+ KernelGateway LCC: These LCCs govern Studio Classic notebook environment optimizations, such as updating the NumPy package version in the `Data Science 3.0` kernel or installing the Snowflake package in the `PyTorch 2.0 GPU` kernel.

In the simplified Studio architecture, you only need one LCC script that runs at application startup. While the migration of your LCC scripts varies based on your development environment, we recommend combining your JupyterServer and KernelGateway LCCs into a single LCC.

LCCs in Studio can be associated with one of the following applications: 
+ JupyterLab 
+ Code Editor

Users can select the LCC for the respective application type when creating a space or use the default LCC set by the admin.
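A combined LCC can be registered from the AWS CLI with `create-studio-lifecycle-config`, which takes the script content base64-encoded. The script body and configuration name in the following sketch are placeholders:

```
# Sketch: register a combined LCC for the JupyterLab application type.
# The script body and configuration name are placeholders.
LCC_SCRIPT='#!/bin/bash
set -eux
pip install --quiet numpy'

# The API expects the script content base64-encoded, with no line wrapping.
LCC_CONTENT=$(printf '%s' "$LCC_SCRIPT" | base64 | tr -d '\n')

# Run only when the AWS CLI is installed and credentials are configured.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws sagemaker create-studio-lifecycle-config \
    --studio-lifecycle-config-name example-jupyterlab-lcc \
    --studio-lifecycle-config-content "$LCC_CONTENT" \
    --studio-lifecycle-config-app-type JupyterLab
fi
```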

**Note**  
Existing Studio Classic auto-shutdown scripts do not work with Studio. For an example Studio auto-shutdown script, see [SageMaker Studio Lifecycle Configuration examples](https://github.com/aws-samples/sagemaker-studio-apps-lifecycle-config-examples).

### Considerations when refactoring LCCs
<a name="studio-updated-migrate-lcc-considerations"></a>

Consider the following differences between Studio Classic and Studio when refactoring your LCCs.
+ JupyterLab and Code Editor applications are run as `sagemaker-user` with `UID:1001` and `GID:101` when they are created. By default, `sagemaker-user` has permissions to assume sudo (root) permissions. In contrast, KernelGateway applications are run as `root` by default.
+ SageMaker Distribution images that run inside JupyterLab and Code Editor apps use the Debian-based package manager, `apt-get`.
+ Studio JupyterLab and Code Editor applications use the Conda package manager. SageMaker AI creates a single base Python3 Conda environment when a Studio application is launched. For information about updating packages in the base Conda environment and creating new Conda environments, see [JupyterLab user guide](studio-updated-jl-user-guide.md). In contrast, not all KernelGateway applications use Conda as a package manager.
+ The Studio JupyterLab application uses `JupyterLab 4.0`, while Studio Classic uses `JupyterLab 3.0`. Validate that all JupyterLab extensions you use are compatible with `JupyterLab 4.0`. For more information about extensions, see [Extension Compatibility with JupyterLab 4.0](https://github.com/jupyterlab/jupyterlab/issues/14590).

# (Optional) Migrate data from Studio Classic to Studio
<a name="studio-updated-migrate-data"></a>

Studio Classic and Studio use two different types of storage volumes. Studio Classic uses a single Amazon Elastic File System (Amazon EFS) volume to store data across all users and shared spaces in the domain. In Studio, each space gets its own Amazon Elastic Block Store (Amazon EBS) volume. When you update the default experience of an existing domain, SageMaker AI automatically mounts a folder in an Amazon EFS volume for each user in a domain. As a result, users are able to access files from Studio Classic in their Studio applications. For more information, see [Amazon EFS auto-mounting in Studio](studio-updated-automount.md). 

You can also opt out of Amazon EFS auto-mounting and manually migrate the data to give users access to files from Studio Classic in Studio applications. To accomplish this, you must transfer the files from the user home directories to the Amazon EBS volumes associated with those spaces. The following section gives information about this workflow. For more information about opting out of Amazon EFS auto-mounting, see [Opt out of Amazon EFS auto-mounting](studio-updated-automount-optout.md).

## Manually migrate all of your data from Studio Classic
<a name="studio-updated-migrate-data-all"></a>

The following section describes how to migrate all of the data from your Studio Classic storage volume to the new Studio experience.

When manually migrating a user's data, code, and artifacts from Studio Classic to Studio, we recommend one of the following approaches:

1. Using a custom Amazon EFS volume

1. Using Amazon Simple Storage Service (Amazon S3)

If you used Amazon SageMaker Data Wrangler in Studio Classic and want to migrate your data flow files, then choose one of the following options for migration:
+ If you want to migrate all of the data from your Studio Classic storage volume, including your data flow files, go to [Manually migrate all of your data from Studio Classic](#studio-updated-migrate-data-all) and complete the section **Use Amazon S3 to migrate data**. Then, skip to the [Import the flow files into Canvas](#studio-updated-migrate-flows-import) section.
+ If you only want to migrate your data flow files and no other data from your Studio Classic storage volume, skip to the [Migrate data flows from Data Wrangler](#studio-updated-migrate-flows) section.

### Prerequisites
<a name="studio-updated-migrate-data-prereq"></a>

Before running these steps, complete the prerequisites in [Complete prerequisites to migrate the Studio experience](studio-updated-migrate-prereq.md). You must also complete the steps in [Migrate the UI from Studio Classic to Studio](studio-updated-migrate-ui.md).

### Choosing an approach
<a name="studio-updated-migrate-data-choose"></a>

Consider the following when choosing an approach to migrate your Studio Classic data.

**Pros and cons of using a custom Amazon EFS volume**

In this approach, you use an Amazon EFS-to-Amazon EFS AWS DataSync task (run one time or on a recurring schedule) to copy data, then mount the target Amazon EFS volume to a user’s spaces. This gives users access to data from Studio Classic in their Studio compute environments.

Pros:
+ Only the user’s home directory data is visible in the user's spaces. There is no data cross-pollination.
+ Syncing from the source Amazon EFS volume to a target Amazon EFS volume is safer than directly mounting the source Amazon EFS volume managed by SageMaker AI into spaces, because it avoids any impact to user files in the home directories.
+ Users have the flexibility to continue working in Studio Classic and Studio applications, while having their data available in both applications if AWS DataSync is set up on a regular cadence.
+ No need for repeated push and pull with Amazon S3.

Cons:
+ No write access to the target Amazon EFS volume mounted to users' spaces. To get write access to the target Amazon EFS volume, customers would need to mount the target Amazon EFS volume to an Amazon Elastic Compute Cloud (Amazon EC2) instance and provide appropriate permissions for users to write to the Amazon EFS prefix.
+ Requires modification to the security groups managed by SageMaker AI to allow network file system (NFS) inbound and outbound flow.
+ Costs more than using Amazon S3.
+ If [migrating data flows from Data Wrangler in Studio Classic](#studio-updated-migrate-flows), you must follow the steps for manually exporting flow files.

**Pros and cons of using Amazon S3**

In this approach, you use an Amazon EFS-to-Amazon S3 AWS DataSync task (run one time or on a recurring schedule) to copy data, then create a lifecycle configuration to copy the user’s data from Amazon S3 to their private space’s Amazon EBS volume.

Pros:
+ If the LCC is attached to the domain, users can choose to use the LCC to copy data to their space or to run the space with no LCC script. This gives users the choice to copy their files only to the spaces they need.
+ If an AWS DataSync task is set up on a cadence, users can restart their Studio application to get the latest files.
+ Because the data is copied over to Amazon EBS, users have write permissions on the files.
+ Amazon S3 storage is cheaper than Amazon EFS.
+ If [migrating data flows from Data Wrangler in Studio Classic](#studio-updated-migrate-flows), you can skip the manual export steps and directly import the data flows into SageMaker Canvas from Amazon S3.

Cons:
+ If administrators need to prevent cross-pollination, they must create AWS Identity and Access Management policies at the user level to ensure users can only access the Amazon S3 prefix that contains their files.
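For example, the copy step in the Amazon S3 approach could appear in the LCC script roughly as follows; the bucket name and per-user prefix are placeholders:

```
# Sketch: inside an LCC script, copy a user's files from Amazon S3 into the
# space's Amazon EBS home directory. Bucket and prefix are placeholders.
S3_SOURCE="s3://example-classic-backup-bucket/example-user/"
TARGET_DIR="${HOME}/studio-classic-files"

# Run only when the AWS CLI is installed and credentials are configured.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  mkdir -p "$TARGET_DIR"
  aws s3 sync "$S3_SOURCE" "$TARGET_DIR"
fi
```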

### Use a custom Amazon EFS volume to migrate data
<a name="studio-updated-migrate-data-approach1"></a>

In this approach, you use an Amazon EFS-to-Amazon EFS AWS DataSync task to copy the contents of the Studio Classic Amazon EFS volume to a target Amazon EFS volume, either one time or on a recurring schedule, and then mount the target Amazon EFS volume to a user’s spaces. This gives users access to data from Studio Classic in their Studio compute environments.

1. Create a target Amazon EFS volume. You will transfer data into this Amazon EFS volume and mount it to a corresponding user's space using prefix-level mounting.

   ```
   export SOURCE_DOMAIN_ID="domain-id"
   export REGION="region"
   
   export TARGET_EFS=$(aws efs create-file-system --performance-mode generalPurpose --throughput-mode bursting --encrypted --region $REGION | jq -r '.FileSystemId')
   
   echo "Target EFS volume Created: $TARGET_EFS"
   ```

1. Add variables for the source Amazon EFS volume currently attached to the domain and used by all users. The domain's Amazon Virtual Private Cloud information is required to ensure the target Amazon EFS is created in the same Amazon VPC and subnet, with the same security group configuration.

   ```
   export SOURCE_EFS=$(aws sagemaker describe-domain --domain-id $SOURCE_DOMAIN_ID | jq -r '.HomeEfsFileSystemId')
   export VPC_ID=$(aws sagemaker describe-domain --domain-id $SOURCE_DOMAIN_ID | jq -r '.VpcId')
   
   echo "EFS managed by SageMaker: $SOURCE_EFS | VPC: $VPC_ID"
   ```

1. Create an Amazon EFS mount target in the same Amazon VPC and subnet as the source Amazon EFS volume, with the same security group configuration. The mount target takes a few minutes to be available.

   ```
   export EFS_VPC_ID=$(aws efs describe-mount-targets --file-system-id $SOURCE_EFS | jq -r ".MountTargets[0].VpcId")
   export EFS_AZ_NAME=$(aws efs describe-mount-targets --file-system-id $SOURCE_EFS | jq -r ".MountTargets[0].AvailabilityZoneName")
   export EFS_AZ_ID=$(aws efs describe-mount-targets --file-system-id $SOURCE_EFS | jq -r ".MountTargets[0].AvailabilityZoneId")
   export EFS_SUBNET_ID=$(aws efs describe-mount-targets --file-system-id $SOURCE_EFS | jq -r ".MountTargets[0].SubnetId")
   export EFS_MOUNT_TARG_ID=$(aws efs describe-mount-targets --file-system-id $SOURCE_EFS | jq -r ".MountTargets[0].MountTargetId")
   export EFS_SG_IDS=$(aws efs describe-mount-target-security-groups --mount-target-id $EFS_MOUNT_TARG_ID | jq -r '.SecurityGroups[]')
   
   aws efs create-mount-target \
   --file-system-id $TARGET_EFS \
   --subnet-id $EFS_SUBNET_ID \
   --security-groups $EFS_SG_IDS
   ```

1. Create Amazon EFS source and destination locations for the AWS DataSync task.

   ```
   export SOURCE_EFS_ARN=$(aws efs describe-file-systems --file-system-id $SOURCE_EFS | jq -r ".FileSystems[0].FileSystemArn")
   export TARGET_EFS_ARN=$(aws efs describe-file-systems --file-system-id $TARGET_EFS | jq -r ".FileSystems[0].FileSystemArn")
   export EFS_SUBNET_ID_ARN=$(aws ec2 describe-subnets --subnet-ids $EFS_SUBNET_ID | jq -r ".Subnets[0].SubnetArn")
   export ACCOUNT_ID=$(aws ec2 describe-security-groups --group-ids $EFS_SG_IDS | jq -r ".SecurityGroups[0].OwnerId")
   export EFS_SG_ID_ARN=arn:aws:ec2:$REGION:$ACCOUNT_ID:security-group/$EFS_SG_IDS
   
   export SOURCE_LOCATION_ARN=$(aws datasync create-location-efs --subdirectory "/" --efs-filesystem-arn $SOURCE_EFS_ARN --ec2-config SubnetArn=$EFS_SUBNET_ID_ARN,SecurityGroupArns=$EFS_SG_ID_ARN --region $REGION | jq -r ".LocationArn")
   export DESTINATION_LOCATION_ARN=$(aws datasync create-location-efs --subdirectory "/" --efs-filesystem-arn $TARGET_EFS_ARN --ec2-config SubnetArn=$EFS_SUBNET_ID_ARN,SecurityGroupArns=$EFS_SG_ID_ARN --region $REGION | jq -r ".LocationArn")
   ```

1. Allow traffic between the source and target network file system (NFS) mounts. When a new domain is created, SageMaker AI creates two security groups.
   + NFS inbound security group with only inbound traffic.
   + NFS outbound security group with only outbound traffic.

   The source and target NFS are placed inside the same security groups. You can allow traffic between these mounts from the AWS Management Console or AWS CLI.
   + Allow traffic from the AWS Management Console

      1. Sign in to the AWS Management Console and open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

     1. Choose **Security Groups**.

     1. Search for the existing domain's ID on the **Security Groups** page.

        ```
        d-xxxxxxx
        ```

        The results should return two security groups that include the domain ID in the name.
        + `security-group-for-inbound-nfs-domain-id`
        + `security-group-for-outbound-nfs-domain-id`

     1. Select the inbound security group ID. This opens a new page with details about the security group.

     1. Select the **Outbound Rules** tab.

     1. Select **Edit outbound rules**.

     1. Update the existing outbound rules or add a new outbound rule with the following values:
        + **Type**: NFS
        + **Protocol**: TCP
        + **Port range**: 2049
        + **Destination**: The security group ID of security-group-for-outbound-nfs-*domain-id*

     1. Choose **Save rules**.

      1. Return to the **Security Groups** page and select the outbound security group ID, security-group-for-outbound-nfs-*domain-id*. This opens a new page with details about the security group.

      1. Select the **Inbound Rules** tab.

      1. Select **Edit inbound rules**.

      1. Update the existing inbound rules or add a new inbound rule with the following values:
        + **Type**: NFS
        + **Protocol**: TCP
        + **Port range**: 2049
        + **Source**: The security group ID of security-group-for-inbound-nfs-*domain-id*

     1. Choose **Save rules**.
   + Allow traffic from the AWS CLI

     1.  Update the security group inbound and outbound rules with the following values:
        + **Protocol**: TCP
        + **Port range**: 2049
        + **Group ID**: Inbound security group ID or outbound security group ID

        ```
        export INBOUND_SG_ID=$(aws ec2 describe-security-groups --filters "Name=group-name,Values=security-group-for-inbound-nfs-$SOURCE_DOMAIN_ID" | jq -r ".SecurityGroups[0].GroupId")
        export OUTBOUND_SG_ID=$(aws ec2 describe-security-groups --filters "Name=group-name,Values=security-group-for-outbound-nfs-$SOURCE_DOMAIN_ID" | jq -r ".SecurityGroups[0].GroupId")
        
        echo "Outbound SG ID: $OUTBOUND_SG_ID | Inbound SG ID: $INBOUND_SG_ID"
        aws ec2 authorize-security-group-egress \
        --group-id $INBOUND_SG_ID \
        --protocol tcp --port 2049 \
        --source-group $OUTBOUND_SG_ID
        
        aws ec2 authorize-security-group-ingress \
        --group-id $OUTBOUND_SG_ID \
        --protocol tcp --port 2049 \
        --source-group $INBOUND_SG_ID
        ```

      1.  Add both the inbound and outbound security groups to the source and target Amazon EFS mount targets. This allows traffic between the two Amazon EFS mounts.

        ```
        export SOURCE_EFS_MOUNT_TARGET=$(aws efs describe-mount-targets --file-system-id $SOURCE_EFS | jq -r ".MountTargets[0].MountTargetId")
        export TARGET_EFS_MOUNT_TARGET=$(aws efs describe-mount-targets --file-system-id $TARGET_EFS | jq -r ".MountTargets[0].MountTargetId")
        
        aws efs modify-mount-target-security-groups \
        --mount-target-id $SOURCE_EFS_MOUNT_TARGET \
        --security-groups $INBOUND_SG_ID $OUTBOUND_SG_ID
        
        aws efs modify-mount-target-security-groups \
        --mount-target-id $TARGET_EFS_MOUNT_TARGET \
        --security-groups $INBOUND_SG_ID $OUTBOUND_SG_ID
        ```

1. Create an AWS DataSync task. This returns a task ARN that you can use to run the task on demand or on a regular cadence.

   ```
   export EXTRA_XFER_OPTIONS='VerifyMode=ONLY_FILES_TRANSFERRED,OverwriteMode=ALWAYS,Atime=NONE,Mtime=NONE,Uid=NONE,Gid=NONE,PreserveDeletedFiles=REMOVE,PreserveDevices=NONE,PosixPermissions=NONE,TaskQueueing=ENABLED,TransferMode=CHANGED,SecurityDescriptorCopyFlags=NONE,ObjectTags=NONE'
   export DATASYNC_TASK_ARN=$(aws datasync create-task --source-location-arn $SOURCE_LOCATION_ARN --destination-location-arn $DESTINATION_LOCATION_ARN --name "SMEFS_to_CustomEFS_Sync" --region $REGION --options $EXTRA_XFER_OPTIONS | jq -r ".TaskArn")
   ```
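   If you want the task to run on a regular cadence instead of on demand, you can attach a schedule to it. The following is a sketch; `rate(1 day)` is an example cadence, not a requirement:

   ```
   aws datasync update-task \
   --task-arn $DATASYNC_TASK_ARN \
   --schedule 'ScheduleExpression=rate(1 day)'
   ```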

1. Start an AWS DataSync task to copy data from the source Amazon EFS volume to the target Amazon EFS mount. Because the task does not retain the files' POSIX permissions, users can read from the target Amazon EFS mount, but can't write to it.

   ```
   aws datasync start-task-execution --task-arn $DATASYNC_TASK_ARN
   ```
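   Optionally, you can check the progress of the copy. The following sketch uses `list-task-executions` to look up the latest execution of the task (assuming the task has started at least once) and print its status:

   ```
   # Look up the most recent execution of the task and print its status
   export EXEC_ARN=$(aws datasync list-task-executions --task-arn $DATASYNC_TASK_ARN | jq -r '.TaskExecutions[-1].TaskExecutionArn')
   aws datasync describe-task-execution --task-execution-arn $EXEC_ARN | jq -r '.Status'
   ```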

1. Mount the target Amazon EFS volume on the domain at the root level.

   ```
   aws sagemaker update-domain --domain-id $SOURCE_DOMAIN_ID \
   --default-user-settings '{"CustomFileSystemConfigs": [{"EFSFileSystemConfig": {"FileSystemId": "'"$TARGET_EFS"'", "FileSystemPath": "/"}}]}'
   ```

1. Overwrite every user profile with a `FileSystemPath` prefix. The prefix includes the user's UID, which is created by SageMaker AI. This ensures that each user only has access to their own data and prevents access to other users' data. When a space is created in the domain and the target Amazon EFS volume is mounted to the application, the user's prefix overwrites the domain prefix. As a result, SageMaker AI only mounts the `/user-id` directory on the user's application.

   ```
   aws sagemaker list-user-profiles --domain-id $SOURCE_DOMAIN_ID | jq -r '.UserProfiles[] | "\(.UserProfileName)"' | while read user; do
   export uid=$(aws sagemaker describe-user-profile --domain-id $SOURCE_DOMAIN_ID --user-profile-name $user | jq -r ".HomeEfsFileSystemUid")
   echo "$user $uid"
   aws sagemaker update-user-profile --domain-id $SOURCE_DOMAIN_ID --user-profile-name $user --user-settings '{"CustomFileSystemConfigs": [{"EFSFileSystemConfig":{"FileSystemId": "'"$TARGET_EFS"'", "FileSystemPath": "'"/$uid/"'"}}]}'
   done
   ```

1. Users can then select the custom Amazon EFS filesystem when launching an application. For more information, see [JupyterLab user guide](studio-updated-jl-user-guide.md) or [Launch a Code Editor application in Studio](code-editor-use-studio.md).

### Use Amazon S3 to migrate data
<a name="studio-updated-migrate-data-approach2"></a>

In this approach, you use an Amazon EFS-to-Amazon S3 AWS DataSync task to copy the contents of a Studio Classic Amazon EFS volume to an Amazon S3 bucket, either once or on a regular cadence, and then create a lifecycle configuration to copy the user's data from Amazon S3 to their private space's Amazon EBS volume.

**Note**  
This approach only works for domains that have internet access.

1. Set the source Amazon EFS volume ID from the domain containing the data that you are migrating.

   ```
   timestamp=$(date +%Y%m%d%H%M%S)
   export SOURCE_DOMAIN_ID="domain-id"
   export REGION="region"
   export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
   export EFS_ID=$(aws sagemaker describe-domain --domain-id $SOURCE_DOMAIN_ID | jq -r '.HomeEfsFileSystemId')
   ```

1. Set the target Amazon S3 bucket name. For information about creating an Amazon S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html). The bucket used must have a CORS policy as described in [(Optional) Update your CORS policy to access Amazon S3 buckets](studio-updated-migrate-ui.md#studio-updated-migrate-cors). Users in the domain must also have permissions to access the Amazon S3 bucket.

   In this example, we are copying files to a prefix named `studio-new`. If you are using a single Amazon S3 bucket to migrate multiple domains, use the `studio-new/<domain-id>` prefix to restrict permissions to the files using IAM.

   ```
   export BUCKET_NAME=s3-bucket-name
   export S3_DESTINATION_PATH=studio-new
   ```

1. Create a trust policy that gives AWS DataSync permissions to assume the execution role of your account. 

   ```
   export TRUST_POLICY=$(cat <<EOF
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "datasync.amazonaws.com"
               },
               "Action": "sts:AssumeRole",
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": "$ACCOUNT_ID"
                   },
                   "ArnLike": {
                       "aws:SourceArn": "arn:aws:datasync:$REGION:$ACCOUNT_ID:*"
                   }
               }
           }
       ]
   }
   EOF
   )
   ```

1. Create an IAM role and attach the trust policy.

   ```
   export timestamp=$(date +%Y%m%d%H%M%S)
   export ROLE_NAME="DataSyncS3Role-$timestamp"
   
   aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document "$TRUST_POLICY"
   aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
   echo "Attached IAM Policy AmazonS3FullAccess"
   aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
   echo "Attached IAM Policy AmazonSageMakerFullAccess"
   export ROLE_ARN=$(aws iam get-role --role-name $ROLE_NAME --query 'Role.Arn' --output text)
   echo "Created IAM Role $ROLE_ARN"
   ```

1. Create a security group to give access to the Amazon EFS location.

   ```
   export EFS_ARN=$(aws efs describe-file-systems --file-system-id $EFS_ID | jq -r '.FileSystems[0].FileSystemArn' )
   export EFS_SUBNET_ID=$(aws efs describe-mount-targets --file-system-id $EFS_ID | jq -r '.MountTargets[0].SubnetId')
   export EFS_VPC_ID=$(aws efs describe-mount-targets --file-system-id $EFS_ID | jq -r '.MountTargets[0].VpcId')
   export MOUNT_TARGET_ID=$(aws efs describe-mount-targets --file-system-id $EFS_ID | jq -r '.MountTargets[0].MountTargetId')
   export EFS_SECURITY_GROUP_ID=$(aws efs describe-mount-target-security-groups --mount-target-id $MOUNT_TARGET_ID | jq -r '.SecurityGroups[0]')
   export EFS_SUBNET_ARN=$(aws ec2 describe-subnets --subnet-ids $EFS_SUBNET_ID | jq -r '.Subnets[0].SubnetArn')
   echo "Subnet ID: $EFS_SUBNET_ID"
   echo "Security Group ID: $EFS_SECURITY_GROUP_ID"
   echo "Subnet ARN: $EFS_SUBNET_ARN"
   
   timestamp=$(date +%Y%m%d%H%M%S)
   sg_name="datasync-sg-$timestamp"
   export DATASYNC_SG_ID=$(aws ec2 create-security-group --vpc-id $EFS_VPC_ID --group-name $sg_name --description "DataSync SG" --output text --query 'GroupId')
   aws ec2 authorize-security-group-egress --group-id $DATASYNC_SG_ID --protocol tcp --port 2049 --source-group $EFS_SECURITY_GROUP_ID
   aws ec2 authorize-security-group-ingress --group-id $EFS_SECURITY_GROUP_ID --protocol tcp --port 2049 --source-group $DATASYNC_SG_ID
   export DATASYNC_SG_ARN="arn:aws:ec2:$REGION:$ACCOUNT_ID:security-group/$DATASYNC_SG_ID"
   echo "Security Group ARN: $DATASYNC_SG_ARN"
   ```

1. Create a source Amazon EFS location for the AWS DataSync task.

   ```
   export SOURCE_ARN=$(aws datasync create-location-efs --efs-filesystem-arn $EFS_ARN --ec2-config "{\"SubnetArn\": \"$EFS_SUBNET_ARN\", \"SecurityGroupArns\": [\"$DATASYNC_SG_ARN\"]}" | jq -r '.LocationArn')
   echo "Source Location ARN: $SOURCE_ARN"
   ```

1. Create a target Amazon S3 location for the AWS DataSync task.

   ```
   export BUCKET_ARN="arn:aws:s3:::$BUCKET_NAME"
   export DESTINATION_ARN=$(aws datasync create-location-s3 --s3-bucket-arn $BUCKET_ARN --s3-config "{\"BucketAccessRoleArn\": \"$ROLE_ARN\"}" --subdirectory $S3_DESTINATION_PATH | jq -r '.LocationArn')
   echo "Destination Location ARN: $DESTINATION_ARN"
   ```

1. Create an AWS DataSync task.

   ```
   export TASK_ARN=$(aws datasync create-task --source-location-arn $SOURCE_ARN --destination-location-arn $DESTINATION_ARN | jq -r '.TaskArn')
   echo "DataSync Task: $TASK_ARN"
   ```

1. Start the AWS DataSync task. This task automatically copies data from the source Amazon EFS volume to the target Amazon S3 bucket. Wait for the task to complete.

   ```
   aws datasync start-task-execution --task-arn $TASK_ARN
   ```

1. Check the status of the AWS DataSync task to verify that it is complete. Pass the task execution ARN that the `start-task-execution` command returned in the previous step.

   ```
   export TASK_EXEC_ARN=datasync-task-execution-arn
   echo "Task execution ARN: $TASK_EXEC_ARN"
   export STATUS=$(aws datasync describe-task-execution --task-execution-arn $TASK_EXEC_ARN | jq -r '.Status')
   echo "Execution status: $STATUS"
   while [ "$STATUS" = "QUEUED" ] || [ "$STATUS" = "LAUNCHING" ] || [ "$STATUS" = "PREPARING" ] || [ "$STATUS" = "TRANSFERRING" ] || [ "$STATUS" = "VERIFYING" ]; do
       STATUS=$(aws datasync describe-task-execution --task-execution-arn $TASK_EXEC_ARN | jq -r '.Status')
       if [ $? -ne 0 ]; then
           echo "Error Running DataSync Task"
           exit 1
       fi
       echo "Execution status: $STATUS"
       sleep 30
   done
   ```

1. After the AWS DataSync task is complete, clean up the previously created resources.

   ```
   aws datasync delete-task --task-arn $TASK_ARN
   echo "Deleted task $TASK_ARN"
   aws datasync delete-location --location-arn $SOURCE_ARN
   echo "Deleted location source $SOURCE_ARN"
   aws datasync delete-location --location-arn $DESTINATION_ARN
   echo "Deleted location destination $DESTINATION_ARN"
   aws iam detach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
   aws iam detach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
   aws iam delete-role --role-name $ROLE_NAME
   echo "Deleted IAM Role $ROLE_NAME"
   echo "Wait 5 minutes for the elastic network interface to detach..."
   sleep 300
   aws ec2 revoke-security-group-ingress --group-id $EFS_SECURITY_GROUP_ID --protocol tcp --port 2049 --source-group $DATASYNC_SG_ID
   echo "Revoked Ingress from $EFS_SECURITY_GROUP_ID"
   aws ec2 revoke-security-group-egress --group-id $DATASYNC_SG_ID --protocol tcp --port 2049 --source-group $EFS_SECURITY_GROUP_ID
   echo "Revoked Egress from $DATASYNC_SG_ID"
   aws ec2 delete-security-group --group-id $DATASYNC_SG_ID
   echo "Deleted DataSync SG $DATASYNC_SG_ID"
   ```

1. From your local machine, create a file named `on-start.sh` with the following content. This script copies the user's Amazon EFS home directory data from the per-user prefix in Amazon S3 to the user's Amazon EBS volume in Studio.

   ```
   #!/bin/bash
   set -eo pipefail
   
   sudo apt-get install -y jq
   
   # Studio Variables
   DOMAIN_ID=$(cat /opt/ml/metadata/resource-metadata.json | jq -r '.DomainId')
   SPACE_NAME=$(cat /opt/ml/metadata/resource-metadata.json | jq -r '.SpaceName')
   USER_PROFILE_NAME=$(aws sagemaker describe-space --domain-id=$DOMAIN_ID --space-name=$SPACE_NAME | jq -r '.OwnershipSettings.OwnerUserProfileName')
   
   # S3 bucket to copy from
   BUCKET=s3-bucket-name
   # Subfolder in bucket to copy
   PREFIX=studio-new
   
   # Getting HomeEfsFileSystemUid for the current user-profile
   EFS_FOLDER_ID=$(aws sagemaker describe-user-profile --domain-id $DOMAIN_ID --user-profile-name $USER_PROFILE_NAME | jq -r '.HomeEfsFileSystemUid')
   
   # Local destination directory
   DEST=./studio-classic-efs-backup
   mkdir -p $DEST
   
   echo "Bucket: s3://$BUCKET/$PREFIX/$EFS_FOLDER_ID/"
   echo "Destination $DEST/"
   echo "Excluding .*"
    echo "Excluding **/.*"
   
   aws s3 cp s3://$BUCKET/$PREFIX/$EFS_FOLDER_ID/ $DEST/ \
       --exclude ".*" \
       --exclude "**/.*" \
       --recursive
   ```

1. Convert your script into base64 format. This encoding prevents errors caused by spacing and line break characters. The script type can be either `JupyterLab` or `CodeEditor`.

   ```
   export LCC_SCRIPT_NAME='studio-classic-sync'
   export SCRIPT_FILE_NAME='on-start.sh'
   export SCRIPT_TYPE='JupyterLab-or-CodeEditor'
   LCC_CONTENT=`openssl base64 -A -in ${SCRIPT_FILE_NAME}`
   ```
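   Optionally, you can verify that the encoded content decodes back to your original script before creating the lifecycle configuration. The `decoded-on-start.sh` file name is only used for this local check:

   ```
   # Decode the base64 content and compare it against the original script
   printf '%s' "$LCC_CONTENT" | openssl base64 -d -A > decoded-on-start.sh
   diff $SCRIPT_FILE_NAME decoded-on-start.sh && echo "Round trip OK"
   ```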

1. Verify the following before you use the script: 
   + The Amazon EBS volume is large enough to store the objects that you're exporting.
   + You aren't unintentionally migrating hidden files and folders, such as `.bashrc` and `.condarc`.
   + The AWS Identity and Access Management (IAM) execution role that's associated with Studio user profiles has the policies configured to access only the respective home directory in Amazon S3.

1. Create a lifecycle configuration using your script.

   ```
   aws sagemaker create-studio-lifecycle-config \
       --studio-lifecycle-config-name $LCC_SCRIPT_NAME \
       --studio-lifecycle-config-content $LCC_CONTENT \
       --studio-lifecycle-config-app-type $SCRIPT_TYPE
   ```
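   The command returns the ARN of the new lifecycle configuration. You can look it up for use when attaching the configuration to your domain:

   ```
   # Retrieve the ARN of the lifecycle configuration created above
   export LCC_ARN=$(aws sagemaker describe-studio-lifecycle-config \
       --studio-lifecycle-config-name $LCC_SCRIPT_NAME | jq -r '.StudioLifecycleConfigArn')
   echo "LCC ARN: $LCC_ARN"
   ```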

1. Attach the lifecycle configuration (LCC) to your domain.

   ```
   aws sagemaker update-domain \
       --domain-id $SOURCE_DOMAIN_ID \
       --default-user-settings '
           {"JupyterLabAppSettings":
               {"LifecycleConfigArns":
                   [
                       "lifecycle-config-arn"
                   ]
               }
           }'
   ```
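   The preceding example attaches the lifecycle configuration for JupyterLab applications. If you created the configuration with the `CodeEditor` app type, attach it under `CodeEditorAppSettings` instead:

   ```
   aws sagemaker update-domain \
       --domain-id $SOURCE_DOMAIN_ID \
       --default-user-settings '
           {"CodeEditorAppSettings":
               {"LifecycleConfigArns":
                   [
                       "lifecycle-config-arn"
                   ]
               }
           }'
   ```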

1. Users can then select the LCC script when launching an application. For more information, see [JupyterLab user guide](studio-updated-jl-user-guide.md) or [Launch a Code Editor application in Studio](code-editor-use-studio.md). This automatically syncs the files from Amazon S3 to the Amazon EBS storage for the user's space.

## Migrate data flows from Data Wrangler
<a name="studio-updated-migrate-flows"></a>

If you have previously used Amazon SageMaker Data Wrangler in Amazon SageMaker Studio Classic for data preparation tasks, you can migrate to the new Amazon SageMaker Studio and access the latest version of Data Wrangler in Amazon SageMaker Canvas. Data Wrangler in SageMaker Canvas provides you with an enhanced user experience and access to the latest features, such as a natural language interface and faster performance.

You can onboard to SageMaker Canvas at any time to begin using the new Data Wrangler experience. For more information, see [Getting started with using Amazon SageMaker Canvas](canvas-getting-started.md).

If you have data flow files saved in Studio Classic that you were previously working on, you can onboard to Studio and then import the flow files into Canvas. You have the following options for migration:
+ One-click migration: When you sign in to Canvas, you can use a one-time import option that migrates all of your flow files on your behalf.
+ Manual migration: You can manually import your flow files into Canvas. From Studio Classic, either export the files to Amazon S3 or download them to your local machine. Then, you sign in to the SageMaker Canvas application, import the flow files, and continue your data preparation tasks.

The following guide describes the prerequisites to migration and how to migrate your data flow files using either the one-click or manual option.

### Prerequisites
<a name="studio-updated-migrate-flows-prereqs"></a>

Review the following prerequisites before you begin migrating your flow files.

**Step 1. Migrate the domain and grant permissions**

Before migrating data flow files, you must follow specific steps in the [Migration from Amazon SageMaker Studio Classic](studio-updated-migrate.md) guide to ensure that your user profile's AWS IAM execution role has the required permissions. Before proceeding, follow the [Prerequisites](studio-updated-migrate-prereq.md) and [Migrate the UI from Studio Classic to Studio](studio-updated-migrate-ui.md) sections, which describe how to grant the required permissions, configure Studio as the new experience, and migrate your existing domain.

Specifically, you must have permissions to create a SageMaker Canvas application and use the SageMaker Canvas data preparation features. To obtain these permissions, you can either:
+ Add the [AmazonSageMakerCanvasDataPrepFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSageMakerCanvasDataPrepFullAccess.html) policy to your IAM role, or
+ Attach a least-permissions policy, as shown in the **(Optional) Migrate from Data Wrangler in Studio Classic to SageMaker Canvas** section of the page [Migrate the UI from Studio Classic to Studio](studio-updated-migrate-ui.md).

Make sure to use the same user profile for both Studio and SageMaker Canvas.

After completing the prerequisites outlined in the migration guide, you should have a new domain with the required permissions to access SageMaker Canvas through Studio.

**Step 2. (Optional) Prepare an Amazon S3 location**

If you are doing a manual migration and plan to use Amazon S3 to transfer your flow files instead of using the local download option, you should have an Amazon S3 bucket in your account that you'd like to use for storing the flow files.

### One-click migration method
<a name="studio-updated-migrate-flows-auto"></a>

SageMaker Canvas offers a one-time import option for migrating your data flows from Data Wrangler in Studio Classic to Data Wrangler in SageMaker Canvas. As long as your Studio Classic and Canvas applications share the same Amazon EFS storage volume, you can migrate in one click from Canvas. This streamlined process eliminates the need for manual export and import steps, and you can import all of your flows at once.

Use the following procedure to migrate all of your flow files:

1. Open your latest version of Studio.

1. In Studio, in the left navigation pane, choose the **Data** dropdown menu.

1. From the navigation options, choose **Data Wrangler**.

1. On the **Data Wrangler** page, choose **Run in Canvas**. If you have successfully set up the permissions, this creates a Canvas application for you. The Canvas application may take a few minutes before it's ready. 

1. When Canvas is ready, choose **Open in Canvas**.

1. Canvas opens to the **Data Wrangler** page, and a banner at the top of the page says, *Import your data flows from Data Wrangler in Studio Classic to Canvas. This is a one time import. Learn more.* In the banner, choose **Import All**.
**Warning**  
If you close the banner notification, you won't be able to re-open it or use the one-click migration method anymore. 

A pop-up notification appears, indicating that Canvas is importing your flow files from Studio Classic. If the import is fully successful, you receive another notification that `X` number of flow files were imported, and you can see your flow files on the **Data Wrangler** page of the Canvas application. Any imported flow files that have the same name as existing data flows in your Canvas application are renamed with a suffix. You can open a data flow to verify that it looks as expected.

If any of your flow files don't import successfully, you receive a notification that the import was either partially successful or failed. Choose **View errors** on the notification message to check the individual error messages for guidance on how to reformat incorrectly formatted flow files.

After importing your flow files, you should now be able to continue using Data Wrangler to prepare data in SageMaker Canvas.

### Manual migration method
<a name="studio-updated-migrate-flows-manual"></a>

The following sections describe how to manually import your flow files into Canvas in case the one-click migration method didn't work.

#### Export the flow files from Studio Classic
<a name="studio-updated-migrate-flows-export"></a>

**Note**  
If you've already migrated your Studio Classic data to Amazon S3 by following the instructions in [(Optional) Migrate data from Studio Classic to Studio](#studio-updated-migrate-data), you can skip this step and go straight to the [Import the flow files into Canvas](#studio-updated-migrate-flows-import) section in which you import your flow files from the Amazon S3 location where your Studio Classic data is stored.

You can export your flow files by either saving them to Amazon S3 or downloading them to your local machine. When you import your flow files into SageMaker Canvas in the next step, if you choose the local upload option, then you can only upload 20 flow files at a time. If you have a large number of flow files to import, we recommend that you use Amazon S3 instead.

Follow the instructions in either [Method 1: Use Amazon S3 to transfer flow files](#studio-updated-migrate-flows-export-s3) or [Method 2: Use your local machine to transfer flow files](#studio-updated-migrate-flows-export-local) to proceed.

##### Method 1: Use Amazon S3 to transfer flow files
<a name="studio-updated-migrate-flows-export-s3"></a>

With this method, you use Amazon S3 as the intermediary between Data Wrangler in Studio Classic and Data Wrangler in SageMaker Canvas (accessed through the latest version of Studio). You export the flow files from Studio Classic to Amazon S3, and then in the next step, you access Canvas through Studio and import the flow files from Amazon S3.

Make sure that you have an Amazon S3 bucket prepared as the storage location for the flow files.

Use the following procedure to export your flow files from Studio Classic to Amazon S3:

1. Open Studio Classic.

1. Open a new terminal by doing the following:

   1. On the top navigation bar, choose **File**.

   1. In the context menu, hover over **New**, and then select **Terminal**.

1. By default, the terminal should open in your home directory. Navigate to the folder that contains all of the flow files that you want to migrate.

1. Use the following command to synchronize all of the flow files to the specified Amazon S3 location. Replace `{bucket-name}` and `{folder}` with the path to your desired Amazon S3 location. For more information about the command and parameters, see the [sync](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html) command in the AWS CLI Command Reference.

   ```
   aws s3 sync . s3://{bucket-name}/{folder}/ --exclude "*.*" --include "*.flow"
   ```

   If you are using your own AWS KMS key, then use the following command instead to synchronize the files, and specify your KMS key ID. Make sure that the user's IAM execution role (which should be the same role used in **Step 1. Migrate the domain and grant permissions** of the preceding [Prerequisites](#studio-updated-migrate-flows-prereqs)) has been granted access to use the KMS key.

   ```
   aws s3 sync . s3://{bucket-name}/{folder}/ --exclude "*.*" --include "*.flow" --sse-kms-key-id {your-key-id}
   ```

Your flow files should now be exported. Check your Amazon S3 bucket to make sure that the flow files synchronized successfully.
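For example, the following sketch lists the synchronized flow files, assuming the same `{bucket-name}` and `{folder}` values that you used in the `sync` command:

```
aws s3 ls s3://{bucket-name}/{folder}/ --recursive | grep '\.flow$'
```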

To import these files in the latest version of Data Wrangler, follow the steps in [Import the flow files into Canvas](#studio-updated-migrate-flows-import).

##### Method 2: Use your local machine to transfer flow files
<a name="studio-updated-migrate-flows-export-local"></a>

With this method, you download the flow files from Studio Classic to your local machine. You can download the files directly, or you can compress them as a zip archive. Then, you unpack the zip file locally (if applicable), sign in to Canvas, and import the flow files by uploading them from your local machine.

Use the following procedure to download your flow files from Studio Classic:

1. Open Studio Classic.

1. (Optional) If you want to compress multiple flow files into a zip archive and download them all at once, then do the following:

   1. On the top navigation bar of Studio Classic, choose **File**.

   1. In the context menu, hover over **New**, and then select **Terminal**.

   1. By default, the terminal opens in your home directory. Navigate to the folder that contains all of the flow files that you want to migrate.

   1. Use the following command to pack the flow files in the current directory as a zip. The command excludes any hidden files:

      ```
      find . -not -path "*/.*" -name "*.flow" -print0 | xargs -0 zip my_archive.zip
      ```

1. Download the zip archive or individual flow files to your local machine by doing the following:

   1. In the left navigation pane of Studio Classic, choose **File Browser**.

   1. Find the file you want to download in the file browser.

   1. Right-click the file, and in the context menu, select **Download**.

The files download to your local machine. If you packed them as a zip archive, extract the files locally. Then, to import the files into the latest version of Data Wrangler, follow the steps in [Import the flow files into Canvas](#studio-updated-migrate-flows-import).

#### Import the flow files into Canvas
<a name="studio-updated-migrate-flows-import"></a>

After exporting your flow files, access Canvas through Studio and import the files.

Use the following procedure to import flow files into Canvas:

1. Open your latest version of Studio.

1. In Studio, in the left navigation pane, choose the **Data** dropdown menu.

1. From the navigation options, choose **Data Wrangler**.

1. On the **Data Wrangler** page, choose **Run in Canvas**. If you have successfully set up the permissions, this creates a Canvas application for you. The Canvas application may take a few minutes before it's ready. 

1. When Canvas is ready, choose **Open in Canvas**.

1. Canvas opens to the **Data Wrangler** page. In the top pane, choose **Import data flows**.

1. For **Data source**, choose either **Amazon S3** or **Local upload**.

1. Select your flow files from your Amazon S3 bucket, or upload the files from your local machine.
**Note**  
For local upload, you can upload a maximum of 20 flow files at a time. For larger imports, use Amazon S3. If you select a folder to import, any flow files in sub-folders are also imported.

1. Choose **Import data**.

If the import was successful, you receive a notification that `X` number of flow files were successfully imported.

If your flow files don't import successfully, you receive a notification in the SageMaker Canvas application. Choose **View errors** on the notification message to check the individual error messages for guidance on how to reformat incorrectly formatted flow files.

After your flow files finish importing, go to the **Data Wrangler** page of the SageMaker Canvas application to view your data flows. Try opening a data flow to verify that it looks as expected.
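For imports larger than the 20-file local upload limit, you can stage the flow files in Amazon S3 first and then choose **Amazon S3** as the data source in the import dialog. The following is a minimal sketch using the AWS CLI; the bucket name and local folder are placeholders to replace with your own.

```shell
# Upload only .flow files (including those in sub-folders) to Amazon S3.
# The bucket name and local folder below are placeholders.
aws s3 cp ./data-wrangler-flows s3://amzn-s3-demo-bucket/flow-exports/ \
    --recursive \
    --exclude "*" \
    --include "*.flow"
```

The `--exclude "*"` and `--include "*.flow"` filters ensure that only flow files are uploaded, even if the local folder contains other file types.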

# Amazon SageMaker Studio Classic
<a name="studio"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic is a web-based integrated development environment (IDE) for machine learning (ML). Studio Classic lets you build, train, debug, deploy, and monitor your ML models. Studio Classic includes all of the tools you need to take your models from data preparation to experimentation to production with increased productivity. In a single visual interface, you can do the following tasks:
+ Write and run code in Jupyter notebooks
+ Prepare data for machine learning
+ Build and train ML models
+ Deploy the models and monitor the performance of their predictions
+ Track and debug ML experiments
+ Collaborate with other users in real time

For information on the onboarding steps for Studio Classic, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).

For information about collaborating with other users in real time, see [Collaboration with shared spaces](domain-space.md).

For the AWS Regions supported by Studio Classic, see [Supported Regions and Quotas](regions-quotas.md).

## Amazon SageMaker Studio Classic maintenance phase plan
<a name="studio-deprecation"></a>

The following table describes the timeline for Amazon SageMaker Studio Classic entering its extended maintenance phase.

| Date | Description | 
| --- | --- | 
|  12/31/2024  |  Starting December 31st, Studio Classic reaches end of maintenance. At this point, Studio Classic will no longer receive updates and security fixes. All new domains will be created with Amazon SageMaker Studio as the default.  | 
|  1/31/2025  |  Starting January 31st, users will no longer be able to create new JupyterLab 3 notebooks in Studio Classic. Users will also not be able to restart or update existing notebooks. Users will be able to access existing Studio Classic applications from Studio only to delete or stop existing notebooks.  | 

**Note**  
Your existing Studio Classic domain is not automatically migrated to Studio. For information about migrating, see [Migration from Amazon SageMaker Studio Classic](studio-updated-migrate.md).

**Topics**
+ [Amazon SageMaker Studio Classic maintenance phase plan](#studio-deprecation)
+ [Amazon SageMaker Studio Classic Features](#studio-features)
+ [Amazon SageMaker Studio Classic UI Overview](studio-ui.md)
+ [Launch Amazon SageMaker Studio Classic](studio-launch.md)
+ [JupyterLab Versioning in Amazon SageMaker Studio Classic](studio-jl.md)
+ [Use the Amazon SageMaker Studio Classic Launcher](studio-launcher.md)
+ [Use Amazon SageMaker Studio Classic Notebooks](notebooks.md)
+ [Customize Amazon SageMaker Studio Classic](studio-customize.md)
+ [Perform Common Tasks in Amazon SageMaker Studio Classic](studio-tasks.md)
+ [Amazon SageMaker Studio Classic Pricing](studio-pricing.md)
+ [Troubleshooting Amazon SageMaker Studio Classic](studio-troubleshooting.md)

## Amazon SageMaker Studio Classic Features
<a name="studio-features"></a>

Studio Classic includes the following features:
+ [SageMaker Autopilot](https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development.html)
+ [SageMaker Clarify](https://docs.aws.amazon.com/sagemaker/latest/dg/model-explainability.html)
+ [SageMaker Data Wrangler](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler.html)
+ [SageMaker Debugger](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-on-studio.html)
+ [SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html)
+ [SageMaker Feature Store](https://docs.aws.amazon.com/sagemaker/latest/dg/feature-store-use-with-studio.html)
+ [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html)
+ [Amazon SageMaker Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-studio.html)
+ [SageMaker Model Registry](https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html)
+ [SageMaker Projects](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects.html)
+ [SageMaker Studio Classic Notebooks](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks.html)
+ [SageMaker Studio Universal Notebook](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-notebooks-emr-cluster.html)

# Amazon SageMaker Studio Classic UI Overview
<a name="studio-ui"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic extends the capabilities of JupyterLab with custom resources that can speed up your machine learning (ML) process by harnessing the power of AWS compute. Previous users of JupyterLab will notice the similarity of the user interface. The most prominent additions are detailed in the following sections. For an overview of the original JupyterLab interface, see [The JupyterLab Interface](https://jupyterlab.readthedocs.io/en/latest/user/interface.html). 

The following image shows the default view upon launching Amazon SageMaker Studio Classic. The *left navigation* panel displays all top-level categories of features, and an *[Amazon SageMaker Studio Classic home page](#studio-ui-home)* is open in the *main working area*. Come back to this central point of orientation by choosing the **Home** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/house.png)) at any time, then selecting the **Home** node in the navigation menu.

Try the **Getting started notebook** for an in-product hands-on guide on how to set up and get familiar with Amazon SageMaker Studio Classic features. On the **Quick actions** section of the Studio Classic Home page, choose **Open the Getting started notebook**.

![\[SageMaker Studio Classic home page.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-home.png)


**Note**  
This chapter is based on the updated Studio Classic user interface (UI), available in version `v5.38.x` and later on JupyterLab 3.  
To retrieve your Studio Classic UI version, open a System Terminal from the [Studio Classic Launcher](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-launcher.html), then:  
1. Run `conda activate studio`.  
1. Run `jupyter labextension list`.  
1. Find the version displayed after `@amzn/sagemaker-ui` in the output.  
For information about updating Amazon SageMaker Studio Classic, see [Shut Down and Update Amazon SageMaker Studio Classic](studio-tasks-update-studio.md).
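As a sketch, the last step can be scripted with standard text tools. The sample line below stands in for real `jupyter labextension list` output, whose exact format is an assumption:

```shell
# Pull the version that follows "@amzn/sagemaker-ui" out of extension-list output.
extract_ui_version() {
  grep -o '@amzn/sagemaker-ui v[0-9.]*' | awk '{print $2}'
}

# Stand-in for one line of `jupyter labextension list` output (format is an assumption).
sample='    @amzn/sagemaker-ui v5.38.5 enabled OK'
echo "$sample" | extract_ui_version   # prints v5.38.5
```

Inside a Studio Classic System Terminal, you would pipe the real command through the same function: `jupyter labextension list 2>&1 | extract_ui_version`.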

**Topics**
+ [Amazon SageMaker Studio Classic home page](#studio-ui-home)
+ [Amazon SageMaker Studio Classic layout](#studio-ui-layout)

## Amazon SageMaker Studio Classic home page
<a name="studio-ui-home"></a>

The Home page provides access to common tasks and workflows. In particular, it includes a list of **Quick actions** for common tasks, such as **Open Launcher** to create notebooks and other resources, and **Import & prepare data visually** to create a new flow in Data Wrangler. The **Home** page also offers tooltips on key controls in the UI.

The **Prebuilt and automated solutions** help you get started quickly with SageMaker AI's low-code solutions such as Amazon SageMaker JumpStart and Autopilot.

In **Workflows and tasks**, you can find a list of relevant tasks for each step of your ML workflow that take you to the right tool for the job. For example, **Transform, analyze, and export data** takes you to Amazon SageMaker Data Wrangler and opens the workflow to create a new data flow, and **View all experiments** takes you to SageMaker Experiments and opens the experiments list view.

Upon Studio Classic launch, the **Home** page is open in the main working area. You can customize your SageMaker AI **Home** page by choosing the **Customize Layout** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/layout.png)) at the top right of the **Home** tab.

## Amazon SageMaker Studio Classic layout
<a name="studio-ui-layout"></a>

The Amazon SageMaker Studio Classic interface consists of a *menu bar* at the top, a collapsible *left sidebar* displaying a variety of icons such as the **Home** icon and the **File Browser**, a *status bar* at the bottom of the screen, and a *central area* divided horizontally into two panes. The left pane is a collapsible *navigation panel*. The right pane, or main working area, contains one or more tabs for resources such as launchers, notebooks, terminals, metrics, and graphs, and can be further divided.

On the right corner of the menu bar, you can **Report a bug** in Studio Classic or choose the notification icon (![\[Red circle icon with white exclamation mark, indicating an alert or warning.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/icons/Notification.png)) to view notifications from Studio Classic, such as new Studio Classic versions and new SageMaker AI features. To update to a new version of Studio Classic, see [Shut Down and Update Amazon SageMaker Studio Classic and Apps](studio-tasks-update.md).

The following sections describe the Studio Classic main user interface areas.

### Left sidebar
<a name="studio-ui-nav-bar"></a>

The *left sidebar* includes the following icons. When hovering over an icon, a tooltip displays the icon name. A single click on an icon opens up the left navigation panel with the described functionality. A double click minimizes the left navigation panel.


| Icon | Description | 
| --- | --- | 
|  ![\[The Home icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/house@2x.png)  | **Home** Choose the **Home** icon to open a top-level navigation menu in the *left navigation* panel. Using the **Home** navigation menu, you can discover and navigate to the right tools for each step of your ML workflow. The menu also provides shortcuts to quick-start solutions and learning resources such as documentation and guided tutorials. The menu categories group relevant features together. Choosing **Data**, for example, expands the relevant SageMaker AI capabilities for your data preparations tasks. From here, you can prepare your data with Data Wrangler, create and store ML features with Amazon SageMaker Feature Store, and manage Amazon EMR clusters for large-scale data processing. The categories are ordered following a typical ML workflow from preparing data, to building, training, and deploying ML models (data, pipelines, models, and deployments). When you choose a specific node (such as Data Wrangler), a corresponding page opens in the main working area. Choose **Home** in the navigation menu to open the [Amazon SageMaker Studio Classic home page](#studio-ui-home) | 
|  ![\[The File Browser icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/folder@2x.png)  |  **File Browser** The **File Browser** displays lists of your notebooks, experiments, trials, trial components, endpoints, and low-code solutions. Whether you are in a personal or shared space determines who has access to your files. You can identify which type of space you are in by looking at the top right corner. If you are in a personal app, you see a user icon followed by *[user name]* **/ Personal Studio**, and if you are in a collaborative space, you see a globe icon followed by *[user name]* **/** *[space name]*. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/studio-ui.html) For hierarchical entries, a selectable breadcrumb at the top of the browser shows your location in the hierarchy.  | 
|  ![\[The Property Inspector icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/gears@2x.png)  |  **Property Inspector** The Property Inspector is a notebook cell tools inspector which displays contextual property settings when open.  | 
|  ![\[The Running Terminals and Kernels icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/running-terminals-kernels@2x.png)  |  **Running Terminals and Kernels** You can check the list of all the *kernels* and *terminals* currently running across all notebooks, code consoles, and directories. You can shut down individual resources, including notebooks, terminals, kernels, apps, and instances. You can also shut down all resources in one of these categories at the same time. For more information, see [Shut Down Resources from Amazon SageMaker Studio Classic](notebooks-run-and-manage-shut-down.md).  | 
|  ![\[The Git icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/git@2x.png)  |  **Git** You can connect to a Git repository and then access a full range of Git tools and operations. For more information, see [Clone a Git Repository in Amazon SageMaker Studio Classic](studio-tasks-git.md).  | 
|  ![\[The Table of Contents icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/table-of-contents@2x.png)  | **Table of Contents** A table of contents is auto-generated in the left navigation panel when you have a notebook, Markdown file, or Python file open, letting you navigate the structure of the document. The entries are clickable and scroll the document to the heading in question. | 
|  ![\[The Extensions icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/extensions@2x.png)  |  **Extensions** You can turn on and manage third-party JupyterLab extensions. You can check the already installed extensions and search for extensions by typing the name in the search bar. When you have found the extension you want to install, choose **Install**. After installing your new extensions, be sure to restart JupyterLab by refreshing your browser. For more information, see [JupyterLab Extensions documentation](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html).  | 

### Left navigation panel
<a name="studio-ui-browser"></a>

The left navigation panel content varies with the icon selected in the left sidebar.

For example, choosing the **Home** icon displays the navigation menu. Choosing **File browser** lists all the files and directories available in your workspace (notebooks, experiments, data flows, trials, trial components, endpoints, or low-code solutions).

In the navigation menu, choosing a node brings up the corresponding feature page in the main working area. For example, choosing **Data Wrangler** in the **Data** menu opens up the **Data Wrangler** tab listing all existing flows.

### Main working area
<a name="studio-ui-work"></a>

The main working area consists of multiple tabs that contain your open notebooks, terminals, and detailed information about your experiments and endpoints. In the main working area, you can arrange documents (such as notebooks and text files) and other activities (such as terminals and code consoles) into panels of tabs that you can resize or subdivide. Drag a tab to the center of a tab panel to move the tab to the panel. Subdivide a tab panel by dragging a tab to the left, right, top, or bottom of the panel. The tab for the current activity is marked with a colored top border (blue by default).

**Note**  
All feature pages provide in-product contextual help. To access help, choose **Show information**. The help interface provides a brief introduction to the tool and links to additional resources, such as videos, tutorials, or blogs.

# Launch Amazon SageMaker Studio Classic
<a name="studio-launch"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

After you have onboarded to an Amazon SageMaker AI domain, you can launch an Amazon SageMaker Studio Classic application from either the SageMaker AI console or the AWS CLI. For more information about onboarding to a domain, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).

**Topics**
+ [Launch Amazon SageMaker Studio Classic Using the Amazon SageMaker AI Console](#studio-launch-console)
+ [Launch Amazon SageMaker Studio Classic Using the AWS CLI](#studio-launch-cli)

## Launch Amazon SageMaker Studio Classic Using the Amazon SageMaker AI Console
<a name="studio-launch-console"></a>

The process to navigate to Studio Classic from the Amazon SageMaker AI Console differs depending on whether Studio Classic or Amazon SageMaker Studio is set as the default experience for your domain. For more information about setting the default experience for your domain, see [Migration from Amazon SageMaker Studio Classic](studio-updated-migrate.md).

**Topics**
+ [Prerequisite](#studio-launch-console-prerequisites)

### Prerequisite
<a name="studio-launch-console-prerequisites"></a>

 To complete this procedure, you must onboard to a domain by following the steps in [Onboard to Amazon SageMaker AI domain](https://docs.aws.amazon.com//sagemaker/latest/dg/gs-studio-onboard.html). 

### Launch Studio Classic if Studio is your default experience
<a name="studio-launch-console-updated"></a>

1. Navigate to Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md).

1. From the Studio UI, find the applications pane on the left side.

1. From the applications pane, select **Studio Classic**.

1. From the Studio Classic landing page, select the Studio Classic instance to open.

1. Choose **Open**.

### Launch Studio Classic if Studio Classic is your default experience
<a name="studio-launch-console-classic"></a>

When Studio Classic is your default experience, you can launch an Amazon SageMaker Studio Classic application from the SageMaker AI console using the Studio Classic landing page or the Amazon SageMaker AI domain details page. The following sections demonstrate how to launch the Studio Classic application from the SageMaker AI console.

#### Launch Studio Classic from the domain details page
<a name="studio-launch-console-details"></a>

The following sections describe how to launch a Studio Classic application from the domain details page. The steps to launch the Studio Classic application after you have navigated to the domain details page differ depending on whether you're launching a personal application or a shared space. 

 **Navigate to the domain details page** 

 The following procedure shows how to navigate to the domain details page. 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**.

1. From the list of domains, select the domain that you want to launch the Studio Classic application in.

 **Launch a user profile app** 

 The following procedure shows how to launch a Studio Classic application that is scoped to a user profile. 

1.  On the domain details page, choose the **User profiles** tab. 

1.  Identify the user profile that you want to launch the Studio Classic application for. 

1.  Choose **Launch** for your selected user profile, then choose **Studio Classic**. 

 **Launch a shared space app** 

 The following procedure shows how to launch a Studio Classic application that is scoped to a shared space. 

1.  On the domain details page, choose the **Space management** tab. 

1.  Identify the shared space that you want to launch the Studio Classic application for. 

1.  Choose **Launch Studio Classic** for your selected shared space. 

#### Launch Studio Classic from the Studio Classic landing page
<a name="studio-launch-console-landing"></a>

 The following procedure describes how to launch a Studio Classic application from the Studio Classic landing page. 

 **Launch Studio Classic** 

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Studio Classic**.

1.  Under **Get started**, select the domain that you want to launch the Studio Classic application in. If your user profile only belongs to one domain, you do not see the option for selecting a domain.

1.  Select the user profile that you want to launch the Studio Classic application for. If there is no user profile in the domain, choose **Create user profile**. For more information, see [Add user profiles](domain-user-profile-add.md).

1.  Choose **Launch Studio Classic**. If the user profile belongs to a shared space, choose **Open Spaces**. 

1.  To launch a Studio Classic application scoped to a user profile, choose **Launch personal Studio Classic**. 

1.  To launch a shared Studio Classic application, choose the **Launch shared Studio Classic** button next to the shared space that you want to launch into. 

## Launch Amazon SageMaker Studio Classic Using the AWS CLI
<a name="studio-launch-cli"></a>

You can use the AWS Command Line Interface (AWS CLI) to launch Amazon SageMaker Studio Classic by creating a presigned domain URL.

 **Prerequisites** 

 Before you begin, complete the following prerequisites: 
+  Onboard to Amazon SageMaker AI domain. For more information, see [Onboard to Amazon SageMaker AI domain](https://docs.aws.amazon.com//sagemaker/latest/dg/gs-studio-onboard.html). 
+  Update the AWS CLI by following the steps in [Installing the current AWS CLI Version](https://docs.aws.amazon.com//cli/latest/userguide/install-cliv1.html#install-tool-bundled). 
+  From your local machine, run `aws configure` and provide your AWS credentials. For information about AWS credentials, see [Understanding and getting your AWS credentials](https://docs.aws.amazon.com//general/latest/gr/aws-sec-cred-types.html). 

The following code snippet demonstrates how to launch Amazon SageMaker Studio Classic from the AWS CLI using a presigned domain URL. For more information, see [create-presigned-domain-url](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-presigned-domain-url.html).

```
aws sagemaker create-presigned-domain-url \
--region region \
--domain-id domain-id \
--space-name space-name \
--user-profile-name user-profile-name \
--session-expiration-duration-in-seconds 43200
```
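The command returns a JSON document containing an `AuthorizedUrl` field. As a sketch, you can extract just the URL with `--query`; the domain ID and user profile name below are placeholders to replace with your own values.

```shell
# Capture only the presigned URL (valid for a limited time).
# The domain ID and user profile name below are placeholders.
URL=$(aws sagemaker create-presigned-domain-url \
    --region us-east-1 \
    --domain-id d-xxxxxxxxxxxx \
    --user-profile-name my-user-profile \
    --query 'AuthorizedUrl' \
    --output text)
echo "$URL"
```

You can then open the printed URL in a browser to start the Studio Classic session.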

# JupyterLab Versioning in Amazon SageMaker Studio Classic
<a name="studio-jl"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The Amazon SageMaker Studio Classic interface is based on JupyterLab, which is a web-based interactive development environment for notebooks, code, and data. Studio Classic only supports using JupyterLab 3.

If you created your domain and user profile using the AWS Management Console before 08/31/2022 or using the AWS Command Line Interface before 02/22/2023, then your Studio Classic instance defaulted to JupyterLab 1. After 07/01/2024, you cannot create any Studio Classic applications that run JupyterLab 1.

## JupyterLab 3
<a name="jl3"></a>

JupyterLab 3 includes the following features that are not available in previous versions. For more information about these features, see [JupyterLab 3.0 is released!](https://blog.jupyter.org/jupyterlab-3-0-is-out-4f58385e25bb). 
+ Visual debugger when using the Base Python 2.0 and Data Science 2.0 kernels.
+ File browser filter 
+ Table of Contents (TOC) 
+ Multi-language support 
+ Simple mode 
+ Single interface mode 

### Important changes to JupyterLab 3
<a name="jl3-changes"></a>

 Consider the following when using JupyterLab 3: 
+ When setting the JupyterLab version using the AWS CLI, select the corresponding image for your Region and JupyterLab version from the image list in [From the AWS CLI](#studio-jl-set-cli).
+ In JupyterLab 3, you must activate the `studio` conda environment before installing extensions. For more information, see [Installing JupyterLab and Jupyter Server extensions](#studio-jl-install).
+ Debugger is only supported when using the following images: 
  + Base Python 2.0
  + Data Science 2.0
  + Base Python 3.0
  + Data Science 3.0

## Restricting default JupyterLab version using an IAM policy condition key
<a name="iam-policy"></a>

You can use IAM policy condition keys to restrict the version of JupyterLab that your users can launch.

The following policy shows how to limit the JupyterLab version at the domain level. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BlockJupyterLab3DomainLevelAppCreation",
            "Effect": "Deny",
            "Action": [
                "sagemaker:CreateDomain",
                "sagemaker:UpdateDomain"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:ArnLike": {
                    "sagemaker:ImageArns": "arn:aws:sagemaker:us-east-1:111122223333:image/jupyter-server-3"
                }
            }
        }
    ]
}
```

------

The following policy shows how to limit the JupyterLab version at the user profile level. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BlockUsersFromCreatingJupyterLab3Apps",
            "Effect": "Deny",
            "Action": [
                "sagemaker:CreateUserProfile",
                "sagemaker:UpdateUserProfile"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:ArnLike": {
                    "sagemaker:ImageArns": "arn:aws:sagemaker:us-east-1:111122223333:image/jupyter-server-3"
                }
            }
        }
    ]
}
```

------

The following policy shows how to limit the JupyterLab version at the application level. The `CreateApp` request must include the image ARN for this policy to apply.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BlockJupyterLab3AppLevelAppCreation",
            "Effect": "Deny",
            "Action": "sagemaker:CreateApp",
            "Resource": "*",
            "Condition": {
                "ForAnyValue:ArnLike": {
                    "sagemaker:ImageArns": "arn:aws:sagemaker:us-east-1:111122223333:image/jupyter-server-3"
                }
            }
        }
    ]
}
```

------
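To put one of these policies into effect, create it as a managed policy and attach it to the IAM identity your users assume. The following is a minimal sketch with the AWS CLI; the file name, policy name, role name, and account ID are placeholders.

```shell
# Create a managed policy from one of the JSON documents above (saved locally
# as block-jl3.json), then attach it to an existing IAM role.
# The policy name, role name, and account ID below are placeholders.
aws iam create-policy \
    --policy-name BlockJupyterLab3AppCreation \
    --policy-document file://block-jl3.json

aws iam attach-role-policy \
    --role-name MySageMakerExecutionRole \
    --policy-arn arn:aws:iam::111122223333:policy/BlockJupyterLab3AppCreation
```

After the policy is attached, requests matching the `Deny` condition fail with an "AccessDenied" error.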

## Setting a default JupyterLab version
<a name="studio-jl-set"></a>

The following sections show how to set a default JupyterLab version for Studio Classic using either the console or the AWS CLI.  

### From the console
<a name="studio-jl-set-console"></a>

 You can select the default JupyterLab version to use on either the domain or user profile level during resource creation. To set the default JupyterLab version using the console, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).  

### From the AWS CLI
<a name="studio-jl-set-cli"></a>

 You can select the default JupyterLab version to use on either the domain or user profile level using the AWS CLI.  

 To set the default JupyterLab version using the AWS CLI, you must include the ARN of the desired default JupyterLab version as part of an AWS CLI command. This ARN differs based on the version and the Region of the SageMaker AI domain.  

The following table lists the ARNs of the available JupyterLab versions for each Region:


|  Region  |  JupyterLab 3 image ARN  | 
| --- | --- | 
|  us-east-1  |  arn:aws:sagemaker:us-east-1:081325390199:image/jupyter-server-3  | 
|  us-east-2  |  arn:aws:sagemaker:us-east-2:429704687514:image/jupyter-server-3  | 
|  us-west-1  |  arn:aws:sagemaker:us-west-1:742091327244:image/jupyter-server-3  | 
|  us-west-2  |  arn:aws:sagemaker:us-west-2:236514542706:image/jupyter-server-3  | 
|  af-south-1  |  arn:aws:sagemaker:af-south-1:559312083959:image/jupyter-server-3  | 
|  ap-east-1  |  arn:aws:sagemaker:ap-east-1:493642496378:image/jupyter-server-3  | 
|  ap-south-1  |  arn:aws:sagemaker:ap-south-1:394103062818:image/jupyter-server-3  | 
|  ap-northeast-2  |  arn:aws:sagemaker:ap-northeast-2:806072073708:image/jupyter-server-3  | 
|  ap-southeast-1  |  arn:aws:sagemaker:ap-southeast-1:492261229750:image/jupyter-server-3  | 
|  ap-southeast-2  |  arn:aws:sagemaker:ap-southeast-2:452832661640:image/jupyter-server-3  | 
|  ap-northeast-1  |  arn:aws:sagemaker:ap-northeast-1:102112518831:image/jupyter-server-3  | 
|  ca-central-1  |  arn:aws:sagemaker:ca-central-1:310906938811:image/jupyter-server-3  | 
|  eu-central-1  |  arn:aws:sagemaker:eu-central-1:936697816551:image/jupyter-server-3  | 
|  eu-west-1  |  arn:aws:sagemaker:eu-west-1:470317259841:image/jupyter-server-3  | 
|  eu-west-2  |  arn:aws:sagemaker:eu-west-2:712779665605:image/jupyter-server-3  | 
|  eu-west-3  |  arn:aws:sagemaker:eu-west-3:615547856133:image/jupyter-server-3  | 
|  eu-north-1  |  arn:aws:sagemaker:eu-north-1:243637512696:image/jupyter-server-3  | 
|  eu-south-1  |  arn:aws:sagemaker:eu-south-1:592751261982:image/jupyter-server-3  | 
|  eu-south-2  |  arn:aws:sagemaker:eu-south-2:127363102723:image/jupyter-server-3  | 
|  sa-east-1  |  arn:aws:sagemaker:sa-east-1:782484402741:image/jupyter-server-3  | 
|  cn-north-1  |  arn:aws-cn:sagemaker:cn-north-1:390048526115:image/jupyter-server-3  | 
|  cn-northwest-1  |  arn:aws-cn:sagemaker:cn-northwest-1:390780980154:image/jupyter-server-3  | 
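
If you script domain setup, the lookup in the preceding table can be wrapped in a small helper. The function below is a hypothetical sketch covering only a few Regions; the account IDs are copied from the table, and the `case` statement would need to be extended for the remaining Regions.

```shell
# Hypothetical helper: map a Region to its jupyter-server-3 image ARN.
# Account IDs come from the table above; only a few Regions are shown.
jupyter_server_3_arn () {
    local region="$1" account partition="aws"
    case "$region" in
        us-east-1)  account="081325390199" ;;
        us-east-2)  account="429704687514" ;;
        eu-west-1)  account="470317259841" ;;
        cn-north-1) account="390048526115"; partition="aws-cn" ;;
        *) echo "unknown region: $region" >&2; return 1 ;;
    esac
    echo "arn:${partition}:sagemaker:${region}:${account}:image/jupyter-server-3"
}

jupyter_server_3_arn us-east-1
```

Note that the China Regions use the `aws-cn` partition in the ARN, so a plain string template keyed only on Region and account ID is not sufficient.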

#### Create or update domain
<a name="studio-jl-set-cli-domain"></a>

 You can set a default JupyterLab version at the domain level by invoking [CreateDomain](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateDomain.html) or [UpdateDomain](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateDomain.html) and passing the `DefaultUserSettings.JupyterServerAppSettings.DefaultResourceSpec.SageMakerImageArn` field. 

 The following shows how to create a domain with JupyterLab 3 as the default, using the AWS CLI: 

```
aws --region <REGION> \
sagemaker create-domain \
--domain-name <NEW_DOMAIN_NAME> \
--auth-mode <AUTHENTICATION_MODE> \
--subnet-ids <SUBNET-IDS> \
--vpc-id <VPC-ID> \
--default-user-settings '{
  "JupyterServerAppSettings": {
    "DefaultResourceSpec": {
      "SageMakerImageArn": "arn:aws:sagemaker:<REGION>:<ACCOUNT_ID>:image/jupyter-server-3",
      "InstanceType": "system"
    }
  }
}'
```

 The following shows how to update a domain to use JupyterLab 3 as the default, using the AWS CLI: 

```
aws --region <REGION> \
sagemaker update-domain \
--domain-id <YOUR_DOMAIN_ID> \
--default-user-settings '{
  "JupyterServerAppSettings": {
    "DefaultResourceSpec": {
      "SageMakerImageArn": "arn:aws:sagemaker:<REGION>:<ACCOUNT_ID>:image/jupyter-server-3",
      "InstanceType": "system"
    }
  }
}'
```
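
To confirm that the new default took effect, you can read the setting back with `describe-domain` and pull out the image ARN. The sketch below parses a saved copy of that output so the extraction step can be shown offline; the JSON here is a trimmed, hypothetical response, not the full API output.

```shell
# In practice, capture the live settings first (requires AWS credentials):
#   aws sagemaker describe-domain --domain-id <YOUR_DOMAIN_ID> > domain.json
# A trimmed, hypothetical response stands in for it here:
cat > domain.json <<'EOF'
{
  "DefaultUserSettings": {
    "JupyterServerAppSettings": {
      "DefaultResourceSpec": {
        "SageMakerImageArn": "arn:aws:sagemaker:us-east-1:081325390199:image/jupyter-server-3"
      }
    }
  }
}
EOF

# Extract the default image ARN from the saved response.
grep -o '"SageMakerImageArn": *"[^"]*"' domain.json | grep -o 'arn:[^"]*'
```

If the ARN ends in `image/jupyter-server-3`, the domain default is JupyterLab 3.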

#### Create or update user profile
<a name="studio-jl-set-cli-user"></a>

 You can set a default JupyterLab version at the user profile level by invoking [CreateUserProfile](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateUserProfile.html) or [UpdateUserProfile](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateUserProfile.html) and passing the `UserSettings.JupyterServerAppSettings.DefaultResourceSpec.SageMakerImageArn` field. 

 The following shows how to create a user profile with JupyterLab 3 as the default on an existing domain, using the AWS CLI: 

```
aws --region <REGION> \
sagemaker create-user-profile \
--domain-id <YOUR_DOMAIN_ID> \
--user-profile-name <NEW_USERPROFILE_NAME> \
--query UserProfileArn --output text \
--user-settings '{
  "JupyterServerAppSettings": {
    "DefaultResourceSpec": {
      "SageMakerImageArn": "arn:aws:sagemaker:<REGION>:<ACCOUNT_ID>:image/jupyter-server-3",
      "InstanceType": "system"
    }
  }
}'
```

 The following shows how to update a user profile to use JupyterLab 3 as the default, using the AWS CLI: 

```
aws --region <REGION> \
sagemaker update-user-profile \
 --domain-id <YOUR_DOMAIN_ID> \
 --user-profile-name <EXISTING_USERPROFILE_NAME> \
--user-settings '{
  "JupyterServerAppSettings": {
    "DefaultResourceSpec": {
      "SageMakerImageArn": "arn:aws:sagemaker:<REGION>:<ACCOUNT_ID>:image/jupyter-server-3",
      "InstanceType": "system"
    }
  }
}'
```

## View and update the JupyterLab version of an application from the console
<a name="studio-jl-view"></a>

 The following shows how to view and update the JupyterLab version of an application. 

1.  Navigate to the SageMaker AI **domains** page. 

1.  Select a domain to view its user profiles. 

1.  Select a user to view their applications. 

1.  To view the JupyterLab version of an application, select the application's name. 

1.  To update the JupyterLab version, select **Action**. 

1.  From the dropdown menu, select **Change JupyterLab version**. 

1.  From the **Studio Classic settings** page, select the JupyterLab version from the dropdown menu. 

1.  After the JupyterLab version for the user profile has been successfully updated, restart the JupyterServer application for the version change to take effect. For more information about restarting a JupyterServer application, see [Shut Down and Update Amazon SageMaker Studio Classic](studio-tasks-update-studio.md).

## Installing JupyterLab and Jupyter Server extensions
<a name="studio-jl-install"></a>

In JupyterLab 3, you must activate the `studio` conda environment before installing extensions. The method for this differs if you're installing the extensions from within Studio Classic or using a lifecycle configuration script.

### Installing Extensions from within Studio Classic
<a name="studio-jl-install-studio"></a>

To install extensions from within Studio Classic, you must activate the `studio` environment before you install extensions. 

```
# Before installing extensions
conda activate studio

# Install your extensions
pip install <JUPYTER_EXTENSION>

# After installing extensions
conda deactivate
```

### Installing Extensions using a lifecycle configuration script
<a name="studio-jl-install-lcc"></a>

If you're installing JupyterLab and Jupyter Server extensions in your lifecycle configuration script, you must modify your script so that it works with JupyterLab 3. The following sections show the code needed for existing and new lifecycle configuration scripts.

#### Existing lifecycle configuration script
<a name="studio-jl-install-lcc-existing"></a>

If you're reusing an existing lifecycle configuration script that must work with both versions of JupyterLab, use the following code in your script:

```
# Before installing extension
export AWS_SAGEMAKER_JUPYTERSERVER_IMAGE="${AWS_SAGEMAKER_JUPYTERSERVER_IMAGE:-jupyter-server}"
if [ "$AWS_SAGEMAKER_JUPYTERSERVER_IMAGE" = "jupyter-server-3" ] ; then
   eval "$(conda shell.bash hook)"
   conda activate studio
fi;

# Install your extensions
pip install <JUPYTER_EXTENSION>


# After installing extension
if [ "$AWS_SAGEMAKER_JUPYTERSERVER_IMAGE" = "jupyter-server-3" ]; then
   conda deactivate
fi;
```
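
The guard above relies on the shell's `${VAR:-default}` expansion: if the image environment variable is unset, the JupyterLab 1 image name (`jupyter-server`) is used as the fallback, and the `conda` steps are skipped. A minimal standalone illustration of that expansion:

```shell
# ${VAR:-default} substitutes the default only when VAR is unset or empty.
unset AWS_SAGEMAKER_JUPYTERSERVER_IMAGE
echo "${AWS_SAGEMAKER_JUPYTERSERVER_IMAGE:-jupyter-server}"      # jupyter-server

AWS_SAGEMAKER_JUPYTERSERVER_IMAGE="jupyter-server-3"
echo "${AWS_SAGEMAKER_JUPYTERSERVER_IMAGE:-jupyter-server}"      # jupyter-server-3
```

Keep the default value unquoted inside the expansion; quoting it (`:-'jupyter-server'`) would make the literal quotes part of the fallback string and break the version comparison.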

#### New lifecycle configuration script
<a name="studio-jl-install-lcc-new"></a>

If you're writing a new lifecycle configuration script that only uses JupyterLab 3, you can use the following code in your script:

```
# Before installing extension
eval "$(conda shell.bash hook)"
conda activate studio


# Install your extensions
pip install <JUPYTER_EXTENSION>


conda deactivate
```

# Use the Amazon SageMaker Studio Classic Launcher
<a name="studio-launcher"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can use the Amazon SageMaker Studio Classic Launcher to create notebooks and text files, and to launch terminals and interactive Python shells.

You can open Studio Classic Launcher in any of the following ways:
+ Choose **Amazon SageMaker Studio Classic** at the top left of the Studio Classic interface.
+ Use the keyboard shortcut `Ctrl + Shift + L`.
+ From the Studio Classic menu, choose **File** and then choose **New Launcher**.
+ If the SageMaker AI file browser is open, choose the plus (**+**) sign in the Studio Classic file browser menu.
+ In the **Quick actions** section of the **Home** tab, choose **Open Launcher**. The Launcher opens in a new tab. The **Quick actions** section is visible by default but can be toggled off. Choose **Customize Layout** to turn this section back on.

![\[SageMaker Studio Classic launcher.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-new-launcher.png)


The Launcher consists of the following two sections:

**Topics**
+ [Notebooks and compute resources](#studio-launcher-launch)
+ [Utilities and files](#studio-launcher-other)

## Notebooks and compute resources
<a name="studio-launcher-launch"></a>

In this section, you can create a notebook, open an image terminal, or open a Python console.

To create or launch one of those items:

1. Choose **Change environment** to select a SageMaker image, a kernel, an instance type, and, optionally, add a lifecycle configuration script that runs on image start-up. For more information on lifecycle configuration scripts, see [Use Lifecycle Configurations to Customize Amazon SageMaker Studio Classic](studio-lcc.md). For more information about kernel updates, see [Change the Image or a Kernel for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-change-image.md).

1. Select an item.

**Note**  
When you choose an item from this section, you might incur additional usage charges. For more information, see [Usage Metering for Amazon SageMaker Studio Classic Notebooks](notebooks-usage-metering.md).

The following items are available:
+ **Notebook**

  Launches the notebook in a kernel session on the chosen SageMaker image.

  Creates the notebook in the folder that you have currently selected in the file browser. To view the file browser, in the left sidebar of Studio Classic, choose the **File Browser** icon.
+ **Console**

  Launches the shell in a kernel session on the chosen SageMaker image.

  Opens the shell in the folder that you have currently selected in the file browser.
+ **Image terminal**

  Launches the terminal in a terminal session on the chosen SageMaker image.

  Opens the terminal in the root folder for the user (as shown by the **Home** folder in the file browser).

**Note**  
By default, CPU instances launch on a `ml.t3.medium` instance, while GPU instances launch on a `ml.g4dn.xlarge` instance.

## Utilities and files
<a name="studio-launcher-other"></a>

In this section, you can add contextual help in a notebook; create Python, Markdown, and text files; and open a system terminal.

**Note**  
Items in this section run in the context of Amazon SageMaker Studio Classic and don't incur usage charges.

The following items are available:
+ **Show Contextual Help**

  Opens a new tab that displays contextual help for functions in a Studio Classic notebook. To display the help, choose a function in an active notebook. To make it easier to see the help in context, drag the help tab so that it's adjacent to the notebook tab. To open the help tab from within a notebook, press `Ctrl + I`.

  The following screenshot shows the contextual help for the `Experiment.create` method.  
![\[SageMaker Studio Classic contextual help.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-context-help.png)
+ **System terminal**

  Opens a `bash` shell in the root folder for the user (as shown by the **Home** folder in the file browser).
+ **Text File** and **Markdown File**

  Creates a file of the associated type in the folder that you have currently selected in the file browser. To view the file browser, in the left sidebar, choose the **File Browser** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/folder.png)).

# Use Amazon SageMaker Studio Classic Notebooks
<a name="notebooks"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic notebooks are collaborative notebooks that you can launch quickly because you don't need to set up compute instances and file storage beforehand. Studio Classic notebooks provide persistent storage, which enables you to view and share notebooks even if the instances that the notebooks run on are shut down.

You can share your notebooks with others, so that they can easily reproduce your results and collaborate while building models and exploring your data. You provide access to a read-only copy of the notebook through a secure URL. Dependencies for your notebook are included in the notebook's metadata. When your colleagues copy the notebook, it opens in the same environment as the original notebook.

A Studio Classic notebook runs in an environment defined by the following:
+ Amazon EC2 instance type – The hardware configuration the notebook runs on. The configuration includes the number and type of processors (vCPU and GPU), and the amount and type of memory. The instance type determines the pricing rate.
+ SageMaker image – A container image that is compatible with SageMaker Studio Classic. The image consists of the kernels, language packages, and other files required to run a notebook in Studio Classic. There can be multiple images in an instance. For more information, see [Custom Images in Amazon SageMaker Studio Classic](studio-byoi.md).
+ KernelGateway app – A SageMaker image runs as a KernelGateway app. The app provides access to the kernels in the image. There is a one-to-one correspondence between a SageMaker AI image and a KernelGateway app.
+ Kernel – The process that inspects and runs the code contained in the notebook. A kernel is defined by a *kernel spec* in the image. There can be multiple kernels in an image.

You can change any of these resources from within the notebook.

The following diagram outlines how a notebook kernel runs in relation to the KernelGateway App, User, and domain.

![\[How a notebook kernel runs in relation to the KernelGateway App, User, and domain.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-components.png)


Sample SageMaker Studio Classic notebooks are available in the [aws\_sagemaker\_studio](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/aws_sagemaker_studio) folder of the [Amazon SageMaker example GitHub repository](https://github.com/awslabs/amazon-sagemaker-examples). Each notebook comes with the necessary SageMaker image that opens the notebook with the appropriate kernel.

We recommend that you familiarize yourself with the SageMaker Studio Classic interface and the Studio Classic notebook toolbar before creating or using a Studio Classic notebook. For more information, see [Amazon SageMaker Studio Classic UI Overview](studio-ui.md) and [Use the Studio Classic Notebook Toolbar](notebooks-menu.md).

**Topics**
+ [How Are Amazon SageMaker Studio Classic Notebooks Different from Notebook Instances?](notebooks-comparison.md)
+ [Get Started with Amazon SageMaker Studio Classic Notebooks](notebooks-get-started.md)
+ [Amazon SageMaker Studio Classic Tour](gs-studio-end-to-end.md)
+ [Create or Open an Amazon SageMaker Studio Classic Notebook](notebooks-create-open.md)
+ [Use the Studio Classic Notebook Toolbar](notebooks-menu.md)
+ [Install External Libraries and Kernels in Amazon SageMaker Studio Classic](studio-notebooks-add-external.md)
+ [Share and Use an Amazon SageMaker Studio Classic Notebook](notebooks-sharing.md)
+ [Get Amazon SageMaker Studio Classic Notebook and App Metadata](notebooks-run-and-manage-metadata.md)
+ [Get Notebook Differences in Amazon SageMaker Studio Classic](notebooks-diff.md)
+ [Manage Resources for Amazon SageMaker Studio Classic Notebooks](notebooks-run-and-manage.md)
+ [Usage Metering for Amazon SageMaker Studio Classic Notebooks](notebooks-usage-metering.md)
+ [Available Resources for Amazon SageMaker Studio Classic Notebooks](notebooks-resources.md)

# How Are Amazon SageMaker Studio Classic Notebooks Different from Notebook Instances?
<a name="notebooks-comparison"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

When you're starting a new notebook, we recommend that you create the notebook in Amazon SageMaker Studio Classic instead of launching a notebook instance from the Amazon SageMaker AI console. There are many benefits to using a Studio Classic notebook, including the following:
+ **Faster:** Starting a Studio Classic notebook is faster than launching an instance-based notebook, typically 5 to 10 times faster.
+ **Easy notebook sharing:** Notebook sharing is an integrated feature in Studio Classic. In just a few clicks, users can generate a shareable link that reproduces both the notebook code and the SageMaker image required to run it.
+ **Latest Python SDK:** Studio Classic notebooks come preinstalled with the latest [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable).
+ **Access all Studio Classic features:** Studio Classic notebooks are accessed from within Studio Classic. This enables you to build, train, debug, track, and monitor your models without leaving Studio Classic.
+ **Persistent user directories:** Each member of a Studio team gets their own home directory to store their notebooks and other files. The directory is automatically mounted onto all instances and kernels as they're started, so their notebooks and other files are always available. The home directories are stored in Amazon Elastic File System (Amazon EFS) so that you can access them from other services.
+ **Direct access:** When using IAM Identity Center, you use your IAM Identity Center credentials through a unique URL to directly access Studio Classic. You don't have to interact with the AWS Management Console to run your notebooks.
+ **Optimized images:** Studio Classic notebooks are equipped with a set of predefined SageMaker image settings to get you started faster.

**Note**  
Studio Classic notebooks don't support *local mode*. However, you can use a notebook instance to train a sample of your dataset locally, and then use the same code in a Studio Classic notebook to train on the full dataset.

When you open a notebook in SageMaker Studio Classic, the view is an extension of the JupyterLab interface. The primary features are the same, so you'll find the typical features of a Jupyter notebook and JupyterLab. For more information about the Studio Classic interface, see [Amazon SageMaker Studio Classic UI Overview](studio-ui.md).

# Get Started with Amazon SageMaker Studio Classic Notebooks
<a name="notebooks-get-started"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

To get started, you or your organization's administrator need to complete the SageMaker AI domain onboarding process. For more information, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).

You can access a Studio Classic notebook in any of the following ways:
+ You receive an email invitation to access Studio Classic through your organization's IAM Identity Center, which includes a direct link to log in to Studio Classic without having to use the Amazon SageMaker AI console. You can proceed to the [Next Steps](#notebooks-get-started-next-steps).
+ You receive a link to a shared Studio Classic notebook, which includes a direct link to log in to Studio Classic without having to use the SageMaker AI console. You can proceed to the [Next Steps](#notebooks-get-started-next-steps). 
+ You onboard to a domain and then log in to the SageMaker AI console. For more information, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).

## Launch Amazon SageMaker AI
<a name="notebooks-get-started-log-in"></a>

Complete the steps in [Launch Amazon SageMaker Studio Classic](studio-launch.md) to launch Studio Classic.

## Next Steps
<a name="notebooks-get-started-next-steps"></a>

Now that you're in Studio Classic, you can try any of the following options:
+ To create a Studio Classic notebook or explore Studio Classic end-to-end tutorial notebooks – See [Amazon SageMaker Studio Classic Tour](gs-studio-end-to-end.md) in the next section.
+ To familiarize yourself with the Studio Classic interface – See [Amazon SageMaker Studio Classic UI Overview](studio-ui.md) or try the **Getting started notebook** by selecting **Open the Getting started notebook** in the **Quick actions** section of the Studio Classic Home page.

# Amazon SageMaker Studio Classic Tour
<a name="gs-studio-end-to-end"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

For a walkthrough that takes you on a tour of the main features of Amazon SageMaker Studio Classic, see the [xgboost\_customer\_churn\_studio.ipynb](https://sagemaker-examples.readthedocs.io/en/latest/aws_sagemaker_studio/getting_started/xgboost_customer_churn_studio.html) sample notebook from the [aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples) GitHub repository. The code in the notebook trains multiple models and sets up the SageMaker Debugger and SageMaker Model Monitor. The walkthrough shows you how to view the trials, compare the resulting models, show the debugger results, and deploy the best model using the Studio Classic UI. You don't need to understand the code to follow this walkthrough.

**Prerequisites**

To run the notebook for this tour, you need:
+ An IAM account to sign in to Studio. For information, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).
+ Basic familiarity with the Studio user interface and Jupyter notebooks. For information, see [Amazon SageMaker Studio Classic UI Overview](studio-ui.md).
+ A copy of the [aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples) repository in your Studio environment.

**To clone the repository**

1. Launch Studio Classic following the steps in [Launch Amazon SageMaker Studio Classic](studio-launch.md). For users in IAM Identity Center, sign in using the URL from your invitation email.

1. On the top menu, choose **File**, then **New**, then **Terminal**.

1. At the command prompt, run the following command to clone the [aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples) GitHub repository.

   ```
   $ git clone https://github.com/aws/amazon-sagemaker-examples.git
   ```

**To navigate to the sample notebook**

1. From the **File Browser** on the left menu, select **amazon-sagemaker-examples**.

1. Navigate to the example notebook with the following path.

   `~/amazon-sagemaker-examples/aws_sagemaker_studio/getting_started/xgboost_customer_churn_studio.ipynb`

1. Follow the notebook to learn about Studio Classic's main features.

**Note**  
If you encounter an error when you run the sample notebook, and some time has passed from when you cloned the repository, review the notebook on the remote repository for updates.

# Create or Open an Amazon SageMaker Studio Classic Notebook
<a name="notebooks-create-open"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

When you [Create a Notebook from the File Menu](#notebooks-create-file-menu) in Amazon SageMaker Studio Classic or [Open a notebook in Studio Classic](#notebooks-open) for the first time, you are prompted to set up your environment by choosing a SageMaker image, a kernel, an instance type, and, optionally, a lifecycle configuration script that runs on image start-up. SageMaker AI launches the notebook on an instance of the chosen type. By default, the instance type is set to `ml.t3.medium` (available as part of the [AWS Free Tier](https://aws.amazon.com/free)) for CPU-based images. For GPU-based images, the default instance type is `ml.g4dn.xlarge`.

If you create or open additional notebooks that use the same instance type, whether or not the notebooks use the same kernel, the notebooks run on the same instance of that instance type.

After you launch a notebook, you can change its instance type, SageMaker image, and kernel from within the notebook. For more information, see [Change the Instance Type for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-switch-instance-type.md) and [Change the Image or a Kernel for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-change-image.md).

**Note**  
You can have only one instance of each instance type. Each instance can have multiple SageMaker images running on it. Each SageMaker image can run multiple kernels or terminal instances. 

Billing occurs per instance and starts when the first instance of a given instance type is launched. If you want to create or open a notebook without the risk of incurring charges, open the notebook from the **File** menu and choose **No Kernel** from the **Select Kernel** dialog box. You can read and edit a notebook without a running kernel but you can't run cells.

Billing ends when the SageMaker image for the instance is shut down. For more information, see [Usage Metering for Amazon SageMaker Studio Classic Notebooks](notebooks-usage-metering.md).

For information about shutting down the notebook, see [Shut down resources](notebooks-run-and-manage-shut-down.md#notebooks-run-and-manage-shut-down-sessions).

**Topics**
+ [Open a notebook in Studio Classic](#notebooks-open)
+ [Create a Notebook from the File Menu](#notebooks-create-file-menu)
+ [Create a Notebook from the Launcher](#notebooks-create-launcher)
+ [List of the available instance types, images, and kernels](#notebooks-instance-image-kernels)

## Open a notebook in Studio Classic
<a name="notebooks-open"></a>

Amazon SageMaker Studio Classic can only open notebooks listed in the Studio Classic file browser. For instructions on uploading a notebook to the file browser, see [Upload Files to Amazon SageMaker Studio Classic](studio-tasks-files.md) or [Clone a Git Repository in Amazon SageMaker Studio Classic](studio-tasks-git.md).

**To open a notebook**

1. In the left sidebar, choose the **File Browser** icon ( ![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/folder.png)) to display the file browser.

1. Browse to a notebook file and double-click it to open the notebook in a new tab.

## Create a Notebook from the File Menu
<a name="notebooks-create-file-menu"></a>

**To create a notebook from the File menu**

1. From the Studio Classic menu, choose **File**, choose **New**, and then choose **Notebook**.

1. In the **Change environment** dialog box, use the dropdown menus to select your **Image**, **Kernel**, **Instance type**, and **Start-up script**, then choose **Select**. Your notebook launches and opens in a new Studio Classic tab.  
![\[Studio Classic notebook environment setup.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-notebook-environment-setup.png)

## Create a Notebook from the Launcher
<a name="notebooks-create-launcher"></a>

**To create a notebook from the Launcher**

1. To open the Launcher, choose **Amazon SageMaker Studio Classic** at the top left of the Studio Classic interface or use the keyboard shortcut `Ctrl + Shift + L`.

   To learn about all the available ways to open the Launcher, see [Use the Amazon SageMaker Studio Classic Launcher](studio-launcher.md).

1. In the Launcher, in the **Notebooks and compute resources** section, choose **Change environment**.  
![\[SageMaker Studio Classic set notebook environment.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-launcher-notebook-creation.png)

1. In the **Change environment** dialog box, use the dropdown menus to select your **Image**, **Kernel**, **Instance type**, and **Start-up script**, then choose **Select**.

1. In the Launcher, choose **Create notebook**. Your notebook launches and opens in a new Studio Classic tab.

To view the notebook's kernel session, in the left sidebar, choose the **Running Terminals and Kernels** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/running-terminals-kernels.png)). You can stop the notebook's kernel session from this view.

## List of the available instance types, images, and kernels
<a name="notebooks-instance-image-kernels"></a>

For a list of all available resources, see:
+ [Instance Types Available for Use With Amazon SageMaker Studio Classic Notebooks](notebooks-available-instance-types.md)
+ [Amazon SageMaker Images Available for Use With Studio Classic Notebooks](notebooks-available-images.md)

# Use the Studio Classic Notebook Toolbar
<a name="notebooks-menu"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic notebooks extend the JupyterLab interface. For an overview of the original JupyterLab interface, see [The JupyterLab Interface](https://jupyterlab.readthedocs.io/en/latest/user/interface.html).

The following image shows the toolbar and an empty cell from a Studio Classic notebook.

![\[SageMaker Studio Classic notebook menu.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-notebook-menu.png)


When you pause on a toolbar icon, a tooltip displays the icon's function. Additional notebook commands are found in the Studio Classic main menu. The toolbar includes the following icons:


| Icon | Description | 
| --- | --- | 
|  ![\[The Save and checkpoint icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-save-and-checkpoint.png)  |  **Save and checkpoint** Saves the notebook and updates the checkpoint file. For more information, see [Get the Difference Between the Last Checkpoint](notebooks-diff.md#notebooks-diff-checkpoint).  | 
|  ![\[The Insert cell icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-insert-cell.png)  |  **Insert cell** Inserts a code cell below the current cell. The current cell is noted by the blue vertical marker in the left margin.  | 
|  ![\[The Cut, copy, and paste cells icons.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-cut-copy-paste.png)  |  **Cut, copy, and paste cells** Cuts, copies, and pastes the selected cells.  | 
|  ![\[The Run cells icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-run.png)  |  **Run cells** Runs the selected cells and then makes the cell that follows the last selected cell the new selected cell.  | 
|  ![\[The Interrupt kernel icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-interrupt-kernel.png)  |  **Interrupt kernel** Interrupts the kernel, which cancels the currently running operation. The kernel remains active.  | 
|  ![\[The Restart kernel icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-restart-kernel.png)  |  **Restart kernel** Restarts the kernel. Variables are reset. Unsaved information is not affected.  | 
|  ![\[The Restart kernel and run all cells icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-restart-kernel-run-all-cells.png)  |  **Restart kernel and run all cells** Restarts the kernel, then runs all the cells of the notebook.  | 
|  ![\[The Cell type icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-cell-type.png)  |  **Cell type** Displays or changes the current cell type. The cell types are: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-menu.html)  | 
|  ![\[The Launch terminal icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-launch-terminal.png)  |  **Launch terminal** Launches a terminal in the SageMaker image hosting the notebook. For an example, see [Get App Metadata](notebooks-run-and-manage-metadata.md#notebooks-run-and-manage-metadata-app).  | 
|  ![\[The Checkpoint diff icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-checkpoint-diff.png)  |  **Checkpoint diff** Opens a new tab that displays the difference between the notebook and the checkpoint file. For more information, see [Get the Difference Between the Last Checkpoint](notebooks-diff.md#notebooks-diff-checkpoint).  | 
|  ![\[The Git diff icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-git-diff.png)  |  **Git diff** Only enabled if the notebook is opened from a Git repository. Opens a new tab that displays the difference between the notebook and the last Git commit. For more information, see [Get the Difference Between the Last Commit](notebooks-diff.md#notebooks-diff-git).  | 
|  **2 vCPU + 4 GiB**  |  **Instance type** Displays or changes the instance type the notebook runs in. The format is as follows: `number of vCPUs + amount of memory + number of GPUs` `Unknown` indicates the notebook was opened without specifying a kernel. The notebook runs on the SageMaker Studio instance and doesn't accrue runtime charges. You can't assign the notebook to an instance type. You must specify a kernel and then Studio assigns the notebook to a default type. For more information, see [Create or Open an Amazon SageMaker Studio Classic Notebook](notebooks-create-open.md) and [Change the Instance Type for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-switch-instance-type.md).  | 
|  ![\[The Cluster icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-cluster.png)  |  **Cluster** Connect your notebook to an Amazon EMR cluster and scale your ETL jobs or run large-scale model training using Apache Spark, Hive, or Presto. For more information, see [Data preparation using Amazon EMR](studio-notebooks-emr-cluster.md).  | 
|  **Python 3 (Data Science)**  |  **Kernel and SageMaker Image** Displays or changes the kernel that processes the cells in the notebook. The format is as follows: `Kernel (SageMaker Image)` `No Kernel` indicates the notebook was opened without specifying a kernel. You can edit the notebook but you can't run any cells. For more information, see [Change the Image or a Kernel for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-change-image.md).  | 
|  ![\[The Kernel busy status icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-kernel-status.png)  |  **Kernel busy status** Displays the busy status of the kernel. When the edge of the circle and its interior are the same color, the kernel is busy. The kernel is busy when it is starting and when it is processing cells. Additional kernel states are displayed in the status bar at the bottom-left corner of SageMaker Studio.  | 
|  ![\[The Share notebook icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-share.png)  |  **Share notebook** Shares the notebook. For more information, see [Share and Use an Amazon SageMaker Studio Classic Notebook](notebooks-sharing.md).  | 

To select multiple cells, click in the left margin outside of a cell. Hold down the `Shift` key and use `K` or the `Up` key to select previous cells, or use `J` or the `Down` key to select following cells.

# Install External Libraries and Kernels in Amazon SageMaker Studio Classic
<a name="studio-notebooks-add-external"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic notebooks come with multiple images already installed. These images contain kernels and Python packages including scikit-learn, Pandas, NumPy, TensorFlow, PyTorch, and MXNet. You can also install your own images that contain your choice of packages and kernels. For more information on installing your own image, see [Custom Images in Amazon SageMaker Studio Classic](studio-byoi.md).

The different Jupyter kernels in Amazon SageMaker Studio Classic notebooks are separate conda environments. For information about conda environments, see [Managing environments](https://conda.io/docs/user-guide/tasks/manage-environments.html).

## Package installation tools
<a name="studio-notebooks-external-tools"></a>

**Important**  
Currently, all packages in Amazon SageMaker notebooks are licensed for use with Amazon SageMaker AI and do not require additional commercial licenses. However, this might be subject to change in the future, and we recommend reviewing the licensing terms regularly for any updates.

The method that you use to install Python packages from the terminal differs depending on the image. Studio Classic supports the following package installation tools:
+ **Notebooks** – The following commands are supported. If one of the following does not work on your image, try the other one.
  + `%conda install`
  + `%pip install`
+ **The Jupyter terminal** – You can install packages using pip and conda directly. You can also use `apt-get install` to install system packages from the terminal.

**Note**  
We do not recommend using `pip install -u` or `pip install --user`, because those commands install packages on the user's Amazon EFS volume and can potentially block JupyterServer app restarts. Instead, use a lifecycle configuration to reinstall the required packages on app restarts as shown in [Install packages using lifecycle configurations](#nbi-add-external-lcc).

We recommend using `%pip` and `%conda` to install packages from within a notebook because they correctly take into account the active environment or interpreter being used. For more information, see [Add %pip and %conda magic functions](https://github.com/ipython/ipython/pull/11524). You can also use the system command syntax (lines starting with `!`) to install packages. For example, `!pip install` and `!conda install`. 
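As a rough illustration of why environment awareness matters, the plain-Python analogue of `%pip` is to target the running interpreter explicitly through `sys.executable`. The helper below is a hypothetical sketch for illustration, not a SageMaker API:

```python
import subprocess
import sys

def pip_install(*packages, dry_run=False):
    """Install packages into the environment of the running interpreter.

    Targeting sys.executable mirrors what the %pip magic does: the package
    lands in the kernel's own environment, not whichever pip happens to be
    first on PATH.
    """
    cmd = [sys.executable, "-m", "pip", "install", "--quiet", *packages]
    if not dry_run:
        subprocess.check_call(cmd)
    return cmd

# Inspect the command without actually installing anything.
print(pip_install("pandas", "numpy", dry_run=True))
```

In a notebook cell, `%pip install pandas` achieves the same effect with less code, which is why the magics are the recommended form.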

### Conda
<a name="studio-notebooks-add-external-tools-conda"></a>

Conda is an open source package management system and environment management system that can install packages and their dependencies. SageMaker AI supports using conda with the conda-forge channel. For more information, see [Conda channels](https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/channels.html). The conda-forge channel is a community channel where contributors can upload packages.

**Note**  
Installing packages from conda-forge can take up to 10 minutes. The time required depends on how conda resolves the dependency graph.

All of the SageMaker AI provided environments are functional. User-installed packages might not function correctly.

Conda has two methods for activating environments: `conda activate` and `source activate`. For more information, see [Managing environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html).

**Supported conda operations**
+ `conda install` of a package in a single environment
+ `conda install` of a package in all environments
+ Installing a package from the main conda repository
+ Installing a package from conda-forge
+ Changing the conda install location to use Amazon EBS
+ Supporting both `conda activate` and `source activate`

### Pip
<a name="studio-notebooks-add-external-tools-pip"></a>

Pip is the tool for installing and managing Python packages. By default, pip searches for packages on the Python Package Index (PyPI). Unlike conda, pip doesn't have built-in environment support, so it isn't as thorough as conda for packages with native or system library dependencies. You can use pip to install packages in conda environments, and you can use alternative package repositories with pip instead of PyPI.

**Supported pip operations**
+ Using pip to install a package without an active conda environment
+ Using pip to install a package in a conda environment
+ Using pip to install a package in all conda environments
+ Changing the pip install location to use Amazon EBS
+ Using an alternative repository to install packages with pip

### Unsupported
<a name="studio-notebooks-add-external-tools-misc"></a>

SageMaker AI aims to support as many package installation operations as possible. However, if the packages were installed by SageMaker AI and you use the following operations on these packages, it might make your environment unstable:
+ Uninstalling
+ Downgrading
+ Upgrading

Due to potential issues with network conditions or configurations, or the availability of conda or PyPI, packages might not install in a fixed or deterministic amount of time.

**Note**  
Attempting to install a package in an environment with incompatible dependencies can result in a failure. If issues occur, you can contact the library maintainer about updating the package dependencies. When you modify the environment, such as removing or updating existing packages, this may result in instability of that environment.

## Install packages using lifecycle configurations
<a name="nbi-add-external-lcc"></a>

Install custom images and kernels on the Studio Classic instance's Amazon EBS volume so that they persist when you stop and restart the notebook, and so that any external libraries you install are not updated by SageMaker AI. To do that, use a lifecycle configuration that includes both a script that runs when you create the notebook (`on-create`) and a script that runs each time you restart the notebook (`on-start`). For more information about using lifecycle configurations with Studio Classic, see [Use Lifecycle Configurations to Customize Amazon SageMaker Studio Classic](studio-lcc.md). For sample lifecycle configuration scripts, see [SageMaker AI Studio Classic Lifecycle Configuration Samples](https://github.com/aws-samples/sagemaker-studio-lifecycle-config-examples).
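For illustration, a minimal `on-start` script following this pattern might look like the following configuration fragment. The package names and versions are placeholders; see the samples repository linked above for complete, tested scripts.

```shell
#!/bin/bash
# Hypothetical on-start lifecycle configuration script: reinstall the
# packages your team needs each time the app restarts, rather than
# persisting them to the user's Amazon EFS volume with `pip install --user`.
set -eux

# Illustrative package list -- replace with your own requirements.
pip install --quiet --upgrade pyarrow seaborn
```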

# Share and Use an Amazon SageMaker Studio Classic Notebook
<a name="notebooks-sharing"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can share your Amazon SageMaker Studio Classic notebooks with your colleagues. The shared notebook is a copy. After you share your notebook, any changes you make to your original notebook aren't reflected in the shared notebook, and any changes your colleagues make in their shared copies of the notebook aren't reflected in your original notebook. If you want to share your latest version, you must create a new snapshot and then share it.

**Topics**
+ [Share a Notebook](#notebooks-sharing-share)
+ [Use a Shared Notebook](#notebooks-sharing-using)
+ [Shared spaces and realtime collaboration](#notebooks-sharing-rtc)

## Share a Notebook
<a name="notebooks-sharing-share"></a>

The following screenshot shows the menu from a Studio Classic notebook.

![\[The location of the Share icon in a Studio Classic notebook.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-notebook-menu-share.png)


**To share a notebook**

1. In the upper-right corner of the notebook, choose **Share**.

1. (Optional) In **Create shareable snapshot**, choose any of the following items:
   + **Include Git repo information** – Includes a link to the Git repository that contains the notebook. This enables you and your colleague to collaborate and contribute to the same Git repository.
   + **Include output** – Includes all notebook output that has been saved.
**Note**  
If you're a user in IAM Identity Center and you don't see these options, your IAM Identity Center administrator probably disabled the feature. Contact your administrator.

1. Choose **Create**.

1. After the snapshot is created, choose **Copy link** and then choose **Close**.

1. Share the link with your colleague.

After selecting your sharing options, you are provided with a URL. You can share this link with users who have access to Amazon SageMaker Studio Classic. When a user opens the URL, they're prompted to log in using IAM Identity Center or IAM authentication. The shared notebook is a copy, so changes that the recipient makes aren't reflected in your original notebook.

## Use a Shared Notebook
<a name="notebooks-sharing-using"></a>

You use a shared notebook in the same way as a notebook that you created yourself. You must first log in to your account and then open the shared link. If you don't have an active session, you receive an error.

When you choose a link to a shared notebook for the first time, a read-only version of the notebook opens. To edit the shared notebook, choose **Create a Copy**. This copies the shared notebook to your personal storage.

The copied notebook launches on an instance of the same instance type and SageMaker image that the notebook was using when the sender shared it. If you aren't currently running an instance of that instance type, a new instance is started. Customizations to the SageMaker image aren't shared. You can also inspect the notebook snapshot by choosing **Snapshot Details**.

The following are some important considerations about sharing and authentication:
+ If you have an active session, you see a read-only view of the notebook until you choose **Create a Copy**.
+ If you don't have an active session, you need to log in.
+ If you use IAM to log in, select your user profile after you log in, and then choose **Open Studio Classic**. Then choose the link you were sent.
+ If you use IAM Identity Center to log in, the shared notebook opens automatically in Studio Classic after you log in.

## Shared spaces and realtime collaboration
<a name="notebooks-sharing-rtc"></a>

A shared space consists of a shared JupyterServer application and a shared directory. A key benefit of a shared space is that it facilitates real-time collaboration between members of the shared space. Users collaborating in a shared space get access to a shared Studio Classic application where they can access, read, and edit their notebooks in real time. Real-time collaboration is only supported for JupyterServer applications within a shared space. Users with access to a shared space can simultaneously open, view, edit, and run Jupyter notebooks in the shared Studio Classic application in that space. For more information about shared spaces and real-time collaboration, see [Collaboration with shared spaces](domain-space.md).

# Get Amazon SageMaker Studio Classic Notebook and App Metadata
<a name="notebooks-run-and-manage-metadata"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can access notebook metadata and App metadata using the Amazon SageMaker Studio Classic UI.

**Topics**
+ [Get Studio Classic Notebook Metadata](#notebooks-run-and-manage-metadata-notebook)
+ [Get App Metadata](#notebooks-run-and-manage-metadata-app)

## Get Studio Classic Notebook Metadata
<a name="notebooks-run-and-manage-metadata-notebook"></a>

Jupyter notebooks contain optional metadata that you can access through the Amazon SageMaker Studio Classic UI.

**To view the notebook metadata:**

1. In the right sidebar, choose the **Property Inspector** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/gears.png)). 

1. Open the **Advanced Tools** section.

The metadata should look similar to the following.

```
{
    "instance_type": "ml.t3.medium",
    "kernelspec": {
        "display_name": "Python 3 (Data Science)",
        "language": "python",
        "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:<acct-id>:image/datascience-1.0"
    },
    "language_info": {
        "codemirror_mode": {
            "name": "ipython",
            "version": 3
        },
        "file_extension": ".py",
        "mimetype": "text/x-python",
        "name": "python",
        "nbconvert_exporter": "python",
        "pygments_lexer": "ipython3",
        "version": "3.7.10"
    }
}
```
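Because the metadata is plain JSON, you can read it programmatically. The following is a minimal sketch based on the sample above that recovers the kernel name and the SageMaker image ARN from the `kernelspec.name` field. The `__SAGEMAKER_INTERNAL__` separator is taken from observation of the sample, not a documented contract, and the account ID here is a placeholder:

```python
import json

# Notebook metadata with the same shape as the sample above
# (111122223333 is a placeholder account ID).
notebook_metadata = json.loads("""
{
    "instance_type": "ml.t3.medium",
    "kernelspec": {
        "display_name": "Python 3 (Data Science)",
        "language": "python",
        "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:111122223333:image/datascience-1.0"
    }
}
""")

# The kernelspec name embeds the SageMaker image ARN after an internal
# marker; splitting on it recovers both parts.
kernel_name, _, image_arn = notebook_metadata["kernelspec"]["name"].partition(
    "__SAGEMAKER_INTERNAL__"
)
print(kernel_name)  # python3
print(image_arn)    # arn:aws:sagemaker:us-west-2:111122223333:image/datascience-1.0
```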

## Get App Metadata
<a name="notebooks-run-and-manage-metadata-app"></a>

When you create a notebook in Amazon SageMaker Studio Classic, the App metadata is written to a file named `resource-metadata.json` in the folder `/opt/ml/metadata/`. You can get the App metadata by opening an Image terminal from within the notebook. The metadata gives you the following information, which includes the SageMaker image and instance type the notebook runs in:
+ **AppType** – `KernelGateway` 
+ **DomainId** – The ID of the Studio Classic domain
+ **UserProfileName** – The profile name of the current user
+ **ResourceArn** – The Amazon Resource Name (ARN) of the App, which includes the instance type
+ **ResourceName** – The name of the SageMaker image

Additional metadata might be included for internal use by Studio Classic and is subject to change.

**To get the App metadata**

1. In the center of the notebook menu, choose the **Launch Terminal** icon (![\[Dollar sign icon representing currency or financial transactions.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-launch-terminal.png)). This opens a terminal in the SageMaker image that the notebook runs in.

1. Run the following commands to display the contents of the `resource-metadata.json` file.

   ```
   cd /opt/ml/metadata/
   cat resource-metadata.json
   ```

   The file should look similar to the following.

   ```
   {
       "AppType": "KernelGateway",
       "DomainId": "d-xxxxxxxxxxxx",
       "UserProfileName": "profile-name",
       "ResourceArn": "arn:aws:sagemaker:us-east-2:account-id:app/d-xxxxxxxxxxxx/profile-name/KernelGateway/datascience--1-0-ml-t3-medium",
       "ResourceName": "datascience--1-0-ml",
       "AppImageVersion":""
   }
   ```
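The fields above can also be read from a notebook cell rather than the terminal. The following is a minimal sketch that parses a metadata file with the same shape as the sample; the IDs are placeholders, and the assumption that the ARN path ends in `domain/profile/app-type/app-name` is an observation from the sample, not a documented contract:

```python
import json

# Contents with the same shape as the sample resource-metadata.json above
# (the IDs are placeholders).
raw = """
{
    "AppType": "KernelGateway",
    "DomainId": "d-xxxxxxxxxxxx",
    "UserProfileName": "profile-name",
    "ResourceArn": "arn:aws:sagemaker:us-east-2:111122223333:app/d-xxxxxxxxxxxx/profile-name/KernelGateway/datascience--1-0-ml-t3-medium",
    "ResourceName": "datascience--1-0-ml",
    "AppImageVersion": ""
}
"""
meta = json.loads(raw)

# The last four path segments of the ARN encode domain, user profile,
# app type, and app name; the app name ends with the instance type
# written with dashes instead of dots (ml-t3-medium -> ml.t3.medium).
domain, profile, app_type, app_name = meta["ResourceArn"].split("/")[-4:]
print(meta["AppType"], domain, profile, app_name)
```

On a running notebook you would replace `raw` with the contents of `/opt/ml/metadata/resource-metadata.json`.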

# Get Notebook Differences in Amazon SageMaker Studio Classic
<a name="notebooks-diff"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can display the difference between the current notebook and the last checkpoint or the last Git commit using the Amazon SageMaker AI UI.

The following screenshot shows the menu from a Studio Classic notebook.

![\[The location of the relevant menu in a Studio Classic notebook.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-notebook-menu-diffs.png)


**Topics**
+ [Get the Difference Between the Last Checkpoint](#notebooks-diff-checkpoint)
+ [Get the Difference Between the Last Commit](#notebooks-diff-git)

## Get the Difference Between the Last Checkpoint
<a name="notebooks-diff-checkpoint"></a>

When you create a notebook, a hidden checkpoint file that matches the notebook is created. You can view changes between the notebook and the checkpoint file or revert the notebook to match the checkpoint file.

By default, a notebook is auto-saved every 120 seconds and also when you close the notebook. However, the checkpoint file isn't updated to match the notebook. To save the notebook and update the checkpoint file to match, you must choose the **Save notebook and create checkpoint** icon ( ![\[Padlock icon representing security or access control in cloud services.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-save-and-checkpoint.png)) on the left of the notebook menu or use the `Ctrl + S` keyboard shortcut.

To view the changes between the notebook and the checkpoint file, choose the **Checkpoint diff** icon (![\[Clock icon representing time or duration in a user interface.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-checkpoint-diff.png)) in the center of the notebook menu.

To revert the notebook to the checkpoint file, from the main Studio Classic menu, choose **File** then **Revert Notebook to Checkpoint**.

## Get the Difference Between the Last Commit
<a name="notebooks-diff-git"></a>

If a notebook is opened from a Git repository, you can view the difference between the notebook and the last Git commit.

To view the changes in the notebook from the last Git commit, choose the **Git diff** icon (![\[Dark button with white text displaying "git" in lowercase letters.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/notebook-git-diff.png)) in the center of the notebook menu.

# Manage Resources for Amazon SageMaker Studio Classic Notebooks
<a name="notebooks-run-and-manage"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can change the instance type, and SageMaker image and kernel from within an Amazon SageMaker Studio Classic notebook. To create a custom kernel to use with your notebooks, see [Custom Images in Amazon SageMaker Studio Classic](studio-byoi.md).

**Topics**
+ [Change the Instance Type for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-switch-instance-type.md)
+ [Change the Image or a Kernel for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-change-image.md)
+ [Shut Down Resources from Amazon SageMaker Studio Classic](notebooks-run-and-manage-shut-down.md)

# Change the Instance Type for an Amazon SageMaker Studio Classic Notebook
<a name="notebooks-run-and-manage-switch-instance-type"></a>

When you open a new Studio Classic notebook for the first time, you are assigned a default Amazon Elastic Compute Cloud (Amazon EC2) instance type to run the notebook. When you open additional notebooks on the same instance type, the notebooks run on the same instance as the first notebook, even if the notebooks use different kernels. 

You can change the instance type that your Studio Classic notebook runs on from within the notebook. 

The following information only applies to Studio Classic notebooks. For information about how to change the instance type of an Amazon SageMaker notebook instance, see [Update a Notebook Instance](nbi-update.md).

**Important**  
If you change the instance type, unsaved information and existing settings for the notebook are lost, and installed packages must be reinstalled.  
The previous instance type continues to run even if no kernel sessions or apps are active. You must explicitly stop the instance to stop accruing charges. To stop the instance, see [Shut down resources](notebooks-run-and-manage-shut-down.md#notebooks-run-and-manage-shut-down-sessions).

The following screenshot shows the menu from a Studio Classic notebook. The processor and memory of the instance type powering the notebook are displayed as **2 vCPU + 4 GiB**.

![\[The location of the processor and memory of the instance type for the Studio Classic notebook.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-notebook-menu-instance.png)


**To change the instance type**

1. Choose the processor and memory of the instance type powering the notebook. This opens a pop-up window.

1. From the **Set up notebook environment** pop-up window, select the **Instance type** dropdown menu.

1. From the **Instance type** dropdown, choose one of the instance types that are listed.

1. After choosing a type, choose **Select**.

1. Wait for the new instance to start. When it is ready, the new instance type information is displayed.

For a list of the available instance types, see [Instance Types Available for Use With Amazon SageMaker Studio Classic Notebooks](notebooks-available-instance-types.md). 

# Change the Image or a Kernel for an Amazon SageMaker Studio Classic Notebook
<a name="notebooks-run-and-manage-change-image"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

With Amazon SageMaker Studio Classic notebooks, you can change the notebook's image or kernel from within the notebook.

The following screenshot shows the menu from a Studio Classic notebook. The current SageMaker AI kernel and image are displayed as **Python 3 (Data Science)**, where `Python 3` denotes the kernel and `Data Science` denotes the SageMaker AI image that contains the kernel. The color of the circle to the right indicates whether the kernel is idle or busy. The kernel is busy when the center and the edge of the circle are the same color.

![\[The location of the current kernel and image in the menu bar from a Studio Classic notebook.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-notebook-menu-kernel.png)


**To change a notebook's image or kernel**

1. Choose the image/kernel name in the notebook menu.

1. From the **Set up notebook environment** pop-up window, select the **Image** or **Kernel** dropdown menu.

1. From the dropdown menu, choose one of the images or kernels that are listed.

1. After choosing an image or kernel, choose **Select**.

1. Wait for the kernel's status to show as idle, which indicates the kernel has started.

For a list of available SageMaker images and kernels, see [Amazon SageMaker Images Available for Use With Studio Classic Notebooks](notebooks-available-images.md).

# Shut Down Resources from Amazon SageMaker Studio Classic
<a name="notebooks-run-and-manage-shut-down"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can shut down individual Amazon SageMaker AI resources, including notebooks, terminals, kernels, apps, and instances from Studio Classic. You can also shut down all of the resources in one of these categories at the same time. Amazon SageMaker Studio Classic does not support shutting down resources from within a notebook.

**Note**  
When you shut down a Studio Classic notebook instance, additional resources that you created in Studio Classic are not deleted. For example, additional resources can include SageMaker AI endpoints, Amazon EMR clusters, and Amazon S3 buckets. To stop the accrual of charges, you must manually delete these resources. For information about finding resources that are accruing charges, see [Analyzing your costs with AWS Cost Explorer](https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html).

The following topics demonstrate how to delete these SageMaker AI resources.

**Topics**
+ [Shut down an open notebook](#notebooks-run-and-manage-shut-down-notebook)
+ [Shut down resources](#notebooks-run-and-manage-shut-down-sessions)

## Shut down an open notebook
<a name="notebooks-run-and-manage-shut-down-notebook"></a>

When you shut down a Studio Classic notebook, the notebook is not deleted. The kernel that the notebook is running on is shut down and any unsaved information in the notebook is lost. You can shut down an open notebook from the Studio Classic **File** menu or from the **Running Terminals and Kernels** pane. The following procedure shows how to shut down an open notebook from the Studio Classic **File** menu.

**To shut down an open notebook from the File menu**

1. Launch Studio Classic by following the steps in [Launch Amazon SageMaker Studio Classic](studio-launch.md).

1. (Optional) Save the notebook contents by choosing **File**, then **Save Notebook**.

1. Choose **File**.

1. Choose **Close and Shutdown Notebook**. This opens a pop-up window.

1. From the pop-up window, choose **OK**.

## Shut down resources
<a name="notebooks-run-and-manage-shut-down-sessions"></a>

You can reach the **Running Terminals and Kernels** pane of Amazon SageMaker Studio Classic by selecting the **Running Terminals and Kernels** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/running-terminals-kernels.png)). The **Running Terminals and Kernels** pane consists of four sections. Each section lists all the resources of that type. You can shut down each resource individually or shut down all the resources in a section at the same time.

When you choose to shut down all resources in a section, the following occurs:
+ **RUNNING INSTANCES/RUNNING APPS** – All instances, apps, notebooks, kernel sessions, consoles/shells, and image terminals are shut down. System terminals aren't shut down.
+ **KERNEL SESSIONS** – All kernels, notebooks, and consoles/shells are shut down.
+ **TERMINAL SESSIONS** – All image terminals and system terminals are shut down.

**To shut down resources**

1. Launch Studio Classic by following the steps in [Launch Amazon SageMaker Studio Classic](studio-launch.md).

1. Choose the **Running Terminals and Kernels** icon.

1. Do either of the following:
   + To shut down a specific resource, choose the **Shut Down** icon on the same row as the resource.

     For running instances, a confirmation dialog box lists all of the running apps and resources that SageMaker AI will shut down. To proceed, choose **Shut down all**.
**Note**  
A confirmation dialog box isn't displayed for kernel sessions or terminal sessions.
   + To shut down all resources in a section, choose the **X** to the right of the section label. A confirmation dialog box is displayed. Choose **Shut down all** to proceed.
**Note**  
When you shut down these Studio Classic resources, any additional resources created from Studio Classic, such as SageMaker AI endpoints, Amazon EMR clusters, and Amazon S3 buckets are not deleted. You must manually delete these resources to stop the accrual of charges. For information about finding resources that are accruing charges, see [Analyzing your costs with AWS Cost Explorer](https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html).
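
Besides the **Running Terminals and Kernels** pane, running apps can also be shut down with the AWS SDK. The following is a minimal sketch using the boto3 SageMaker `list_apps` and `delete_app` operations; the helper names and the default behavior of keeping the `JupyterServer` app (so the Studio Classic UI itself stays up) are choices made for this example, not part of the Studio Classic UI procedure above.

```python
def select_apps_to_shut_down(apps, keep_types=("JupyterServer",)):
    """Filter `list_apps` entries down to the running apps worth deleting.

    `apps` is a list of dicts shaped like the `Apps` entries returned by
    the SageMaker `list_apps` API (keys include AppType, AppName, and
    Status). By default, JupyterServer apps are kept.
    """
    return [
        app
        for app in apps
        if app.get("Status") == "InService" and app.get("AppType") not in keep_types
    ]


def shut_down_user_apps(domain_id, user_profile_name):
    """Delete the running apps for one Studio Classic user profile."""
    import boto3  # imported here so the filter above stays dependency-free

    sm = boto3.client("sagemaker")
    apps = sm.list_apps(
        DomainIdEquals=domain_id, UserProfileNameEquals=user_profile_name
    )["Apps"]
    for app in select_apps_to_shut_down(apps):
        sm.delete_app(
            DomainId=domain_id,
            UserProfileName=user_profile_name,
            AppType=app["AppType"],
            AppName=app["AppName"],
        )
```

A call such as `shut_down_user_apps("d-xxxxxxxxxxxx", "my-user-profile")` (with placeholder identifiers replaced by your own domain ID and user profile name) shuts down that profile's running kernel gateway apps.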

# Usage Metering for Amazon SageMaker Studio Classic Notebooks
<a name="notebooks-usage-metering"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

There is no additional charge for using Amazon SageMaker Studio Classic. The costs incurred for running Amazon SageMaker Studio Classic notebooks, interactive shells, consoles, and terminals are based on Amazon Elastic Compute Cloud (Amazon EC2) instance usage.

When you run the following resources, you must choose a SageMaker image and kernel:

**From the Studio Classic Launcher**
+ Notebook
+ Interactive Shell
+ Image Terminal

**From the File menu**
+ Notebook
+ Console

When launched, the resource runs on an Amazon EC2 instance of the chosen instance type. If an instance of that type is already running and available, the resource runs on that instance.

For CPU-based images, the default suggested instance type is `ml.t3.medium`. For GPU-based images, the default suggested instance type is `ml.g4dn.xlarge`.

The costs incurred are based on the instance type. You are billed separately for each instance.

Metering starts when an instance is created. Metering ends when all the apps on the instance are shut down, or the instance is shut down. For information about how to shut down an instance, see [Shut Down Resources from Amazon SageMaker Studio Classic](notebooks-run-and-manage-shut-down.md).

**Important**  
You must shut down the instance to stop incurring charges. If you shut down the notebook running on the instance but don't shut down the instance, you will still incur charges. When you shut down the Studio Classic notebook instances, any additional resources, such as SageMaker AI endpoints, Amazon EMR clusters, and Amazon S3 buckets created from Studio Classic are not deleted. Delete those resources to stop accrual of charges.

When you open multiple notebooks on the same instance type, the notebooks run on the same instance even if they are using different kernels. You are billed only for the time that one instance is running.
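
The metering model described above can be sketched as follows: charges accrue per instance, not per notebook or kernel. The helper and the uptime figures are illustrative only; actual hourly rates are listed on the SageMaker pricing page.

```python
def billed_instance_hours(instance_uptimes):
    """Total metered hours across running instances.

    `instance_uptimes` maps each running instance to its uptime in hours.
    Opening several notebooks on one instance adds nothing here: only the
    instance's own uptime is metered, not the number of notebooks on it.
    """
    return sum(instance_uptimes.values())


# Hypothetical example: three notebooks sharing one ml.t3.medium that ran
# for 4 hours, plus a separate ml.g4dn.xlarge that ran for 2 hours.
uptimes = {"ml.t3.medium-1": 4.0, "ml.g4dn.xlarge-1": 2.0}
total = billed_instance_hours(uptimes)  # 6.0 metered hours in total
```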

You can change the instance type from within the notebook after you open it. For more information, see [Change the Instance Type for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-switch-instance-type.md).

For information about billing along with pricing examples, see [Amazon SageMaker Pricing](https://aws.amazon.com/sagemaker/pricing/).

# Available Resources for Amazon SageMaker Studio Classic Notebooks
<a name="notebooks-resources"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following sections list the available resources for Amazon SageMaker Studio Classic notebooks.

**Topics**
+ [Instance Types Available for Use With Amazon SageMaker Studio Classic Notebooks](notebooks-available-instance-types.md)
+ [Amazon SageMaker Images Available for Use With Studio Classic Notebooks](notebooks-available-images.md)

# Instance Types Available for Use With Amazon SageMaker Studio Classic Notebooks
<a name="notebooks-available-instance-types"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic notebooks run on Amazon Elastic Compute Cloud (Amazon EC2) instances. The following Amazon EC2 instance types are available for use with Studio Classic notebooks. For detailed information on which instance types fit your use case, and their performance capabilities, see [Amazon Elastic Compute Cloud Instance types](https://aws.amazon.com/ec2/instance-types/). For information about pricing for these instance types, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/).

For information about available Amazon SageMaker Notebook Instance types, see [CreateNotebookInstance](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateNotebookInstance.html#sagemaker-CreateNotebookInstance-request-InstanceType).

**Note**  
For most use cases, you should use an `ml.t3.medium` instance. This is the default instance type for CPU-based SageMaker images, and is available as part of the [AWS Free Tier](https://aws.amazon.com/free).

**Topics**
+ [CPU instances](#notebooks-resources-no-gpu)
+ [Instances with 1 or more GPUs](#notebooks-resources-gpu)

## CPU instances
<a name="notebooks-resources-no-gpu"></a>

The following table lists the Amazon EC2 CPU instance types with no GPU attached that are available for use with Studio Classic notebooks. It also lists information about the specifications of each instance type. The default instance type for CPU-based images is `ml.t3.medium`. 

For detailed information on which instance types fit your use case, and their performance capabilities, see [Amazon Elastic Compute Cloud Instance types](https://aws.amazon.com/ec2/instance-types/). For information about pricing for these instance types, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/).

CPU instances


| Instance | Use case | Fast launch | vCPU | Memory (GiB) | Instance Storage (GB) | 
| --- | --- | --- | --- | --- | --- | 
| ml.t3.medium | General purpose | Yes | 2 | 4 | Amazon EBS Only | 
| ml.t3.large | General purpose | No | 2 | 8 | Amazon EBS Only | 
| ml.t3.xlarge | General purpose | No | 4 | 16 | Amazon EBS Only | 
| ml.t3.2xlarge | General purpose | No | 8 | 32 | Amazon EBS Only | 
| ml.m5.large | General purpose | Yes | 2 | 8 | Amazon EBS Only | 
| ml.m5.xlarge | General purpose | No | 4 | 16 | Amazon EBS Only | 
| ml.m5.2xlarge | General purpose | No | 8 | 32 | Amazon EBS Only | 
| ml.m5.4xlarge | General purpose | No | 16 | 64 | Amazon EBS Only | 
| ml.m5.8xlarge | General purpose | No | 32 | 128 | Amazon EBS Only | 
| ml.m5.12xlarge | General purpose | No | 48 | 192 | Amazon EBS Only | 
| ml.m5.16xlarge | General purpose | No | 64 | 256 | Amazon EBS Only | 
| ml.m5.24xlarge | General purpose | No | 96 | 384 | Amazon EBS Only | 
| ml.m5d.large | General purpose | No | 2 | 8 | 1 x 75 NVMe SSD | 
| ml.m5d.xlarge | General purpose | No | 4 | 16 | 1 x 150 NVMe SSD | 
| ml.m5d.2xlarge | General purpose | No | 8 | 32 | 1 x 300 NVMe SSD | 
| ml.m5d.4xlarge | General purpose | No | 16 | 64 | 2 x 300 NVMe SSD | 
| ml.m5d.8xlarge | General purpose | No | 32 | 128 | 2 x 600 NVMe SSD | 
| ml.m5d.12xlarge | General purpose | No | 48 | 192 | 2 x 900 NVMe SSD | 
| ml.m5d.16xlarge | General purpose | No | 64 | 256 | 4 x 600 NVMe SSD | 
| ml.m5d.24xlarge | General purpose | No | 96 | 384 | 4 x 900 NVMe SSD | 
| ml.c5.large | Compute optimized | Yes | 2 | 4 | Amazon EBS Only | 
| ml.c5.xlarge | Compute optimized | No | 4 | 8 | Amazon EBS Only | 
| ml.c5.2xlarge | Compute optimized | No | 8 | 16 | Amazon EBS Only | 
| ml.c5.4xlarge | Compute optimized | No | 16 | 32 | Amazon EBS Only | 
| ml.c5.9xlarge | Compute optimized | No | 36 | 72 | Amazon EBS Only | 
| ml.c5.12xlarge | Compute optimized | No | 48 | 96 | Amazon EBS Only | 
| ml.c5.18xlarge | Compute optimized | No | 72 | 144 | Amazon EBS Only | 
| ml.c5.24xlarge | Compute optimized | No | 96 | 192 | Amazon EBS Only | 
| ml.r5.large | Memory optimized | No | 2 | 16 | Amazon EBS Only | 
| ml.r5.xlarge | Memory optimized | No | 4 | 32 | Amazon EBS Only | 
| ml.r5.2xlarge | Memory optimized | No | 8 | 64 | Amazon EBS Only | 
| ml.r5.4xlarge | Memory optimized | No | 16 | 128 | Amazon EBS Only | 
| ml.r5.8xlarge | Memory optimized | No | 32 | 256 | Amazon EBS Only | 
| ml.r5.12xlarge | Memory optimized | No | 48 | 384 | Amazon EBS Only | 
| ml.r5.16xlarge | Memory optimized | No | 64 | 512 | Amazon EBS Only | 
| ml.r5.24xlarge | Memory optimized | No | 96 | 768 | Amazon EBS Only | 

## Instances with 1 or more GPUs
<a name="notebooks-resources-gpu"></a>

The following table lists the Amazon EC2 instance types with 1 or more GPUs attached that are available for use with Studio Classic notebooks. It also lists information about the specifications of each instance type. The default instance type for GPU-based images is `ml.g4dn.xlarge`. 

For detailed information on which instance types fit your use case, and their performance capabilities, see [Amazon Elastic Compute Cloud Instance types](https://aws.amazon.com/ec2/instance-types/). For information about pricing for these instance types, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/).

Instances with 1 or more GPUs


| Instance | Use case | Fast launch | GPUs | vCPU | Memory (GiB) | GPU Memory (GiB) | Instance Storage (GB) | 
| --- | --- | --- | --- | --- | --- | --- | --- | 
| ml.p3.2xlarge | Accelerated computing | No | 1 | 8 | 61 | 16 | Amazon EBS Only | 
| ml.p3.8xlarge | Accelerated computing | No | 4 | 32 | 244 | 64 | Amazon EBS Only | 
| ml.p3.16xlarge | Accelerated computing | No | 8 | 64 | 488 | 128 | Amazon EBS Only | 
| ml.p3dn.24xlarge | Accelerated computing | No | 8 | 96 | 768 | 256 | 2 x 900 NVMe SSD | 
| ml.p4d.24xlarge | Accelerated computing | No | 8 | 96 | 1152 | 320 GB HBM2 | 8 x 1000 NVMe SSD | 
| ml.p4de.24xlarge | Accelerated computing | No | 8 | 96 | 1152 | 640 GB HBM2e | 8 x 1000 NVMe SSD | 
| ml.g4dn.xlarge | Accelerated computing | Yes | 1 | 4 | 16 | 16 | 1 x 125 NVMe SSD | 
| ml.g4dn.2xlarge | Accelerated computing | No | 1 | 8 | 32 | 16 | 1 x 225 NVMe SSD | 
| ml.g4dn.4xlarge | Accelerated computing | No | 1 | 16 | 64 | 16 | 1 x 225 NVMe SSD | 
| ml.g4dn.8xlarge | Accelerated computing | No | 1 | 32 | 128 | 16 | 1 x 900 NVMe SSD | 
| ml.g4dn.12xlarge | Accelerated computing | No | 4 | 48 | 192 | 64 | 1 x 900 NVMe SSD | 
| ml.g4dn.16xlarge | Accelerated computing | No | 1 | 64 | 256 | 16 | 1 x 900 NVMe SSD | 
| ml.g5.xlarge | Accelerated computing | No | 1 | 4 | 16 | 24 | 1 x 250 NVMe SSD | 
| ml.g5.2xlarge | Accelerated computing | No | 1 | 8 | 32 | 24 | 1 x 450 NVMe SSD | 
| ml.g5.4xlarge | Accelerated computing | No | 1 | 16 | 64 | 24 | 1 x 600 NVMe SSD | 
| ml.g5.8xlarge | Accelerated computing | No | 1 | 32 | 128 | 24 | 1 x 900 NVMe SSD | 
| ml.g5.12xlarge | Accelerated computing | No | 4 | 48 | 192 | 96 | 1 x 3800 NVMe SSD | 
| ml.g5.16xlarge | Accelerated computing | No | 1 | 64 | 256 | 24 | 1 x 1900 NVMe SSD | 
| ml.g5.24xlarge | Accelerated computing | No | 4 | 96 | 384 | 96 | 1 x 3800 NVMe SSD | 
| ml.g5.48xlarge | Accelerated computing | No | 8 | 192 | 768 | 192 | 2 x 3800 NVMe SSD | 

# Amazon SageMaker Images Available for Use With Studio Classic Notebooks
<a name="notebooks-available-images"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

This page lists the SageMaker images and associated kernels that are available in Amazon SageMaker Studio Classic. This page also gives information about the format needed to create the ARN for each image. SageMaker images contain the latest [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable) and the latest version of the kernel. For more information, see [Deep Learning Containers Images](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-images.html).

**Topics**
+ [Image ARN format](#notebooks-available-images-arn)
+ [Supported URI tags](#notebooks-available-uri-tag)
+ [Supported images](#notebooks-available-images-supported)
+ [Images slated for deprecation](#notebooks-available-images-deprecation)
+ [Deprecated images](#notebooks-available-images-deprecated)

## Image ARN format
<a name="notebooks-available-images-arn"></a>

The following table lists the image ARN and URI format for each Region. To create the full ARN for an image, replace the *resource-identifier* placeholder with the corresponding resource identifier for the image. The resource identifier is found in the SageMaker images and kernels table. To create the full URI for an image, replace the *tag* placeholder with the corresponding cpu or gpu tag. For the list of tags you can use, see [Supported URI tags](#notebooks-available-uri-tag).

**Note**  
SageMaker Distribution images use a distinct set of image ARNs, which are listed in the following table.


| Region | Image ARN Format | SageMaker Distribution Image ARN Format | SageMaker Distribution Image URI Format | 
| --- | --- | --- | --- | 
|  us-east-1  | arn:aws:sagemaker:us-east-1:081325390199:image/resource-identifier | arn:aws:sagemaker:us-east-1:885854791233:image/resource-identifier | 885854791233.dkr.ecr.us-east-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  us-east-2  | arn:aws:sagemaker:us-east-2:429704687514:image/resource-identifier | arn:aws:sagemaker:us-east-2:137914896644:image/resource-identifier | 137914896644.dkr.ecr.us-east-2.amazonaws.com/sagemaker-distribution-prod:tag | 
|  us-west-1  | arn:aws:sagemaker:us-west-1:742091327244:image/resource-identifier | arn:aws:sagemaker:us-west-1:053634841547:image/resource-identifier | 053634841547.dkr.ecr.us-west-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  us-west-2  | arn:aws:sagemaker:us-west-2:236514542706:image/resource-identifier | arn:aws:sagemaker:us-west-2:542918446943:image/resource-identifier | 542918446943.dkr.ecr.us-west-2.amazonaws.com/sagemaker-distribution-prod:tag | 
|  af-south-1  | arn:aws:sagemaker:af-south-1:559312083959:image/resource-identifier | arn:aws:sagemaker:af-south-1:238384257742:image/resource-identifier | 238384257742.dkr.ecr.af-south-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ap-east-1  | arn:aws:sagemaker:ap-east-1:493642496378:image/resource-identifier | arn:aws:sagemaker:ap-east-1:523751269255:image/resource-identifier | 523751269255.dkr.ecr.ap-east-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ap-south-1  | arn:aws:sagemaker:ap-south-1:394103062818:image/resource-identifier | arn:aws:sagemaker:ap-south-1:245090515133:image/resource-identifier | 245090515133.dkr.ecr.ap-south-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ap-northeast-2  | arn:aws:sagemaker:ap-northeast-2:806072073708:image/resource-identifier | arn:aws:sagemaker:ap-northeast-2:064688005998:image/resource-identifier | 064688005998.dkr.ecr.ap-northeast-2.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ap-southeast-1  | arn:aws:sagemaker:ap-southeast-1:492261229750:image/resource-identifier | arn:aws:sagemaker:ap-southeast-1:022667117163:image/resource-identifier | 022667117163.dkr.ecr.ap-southeast-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ap-southeast-2  | arn:aws:sagemaker:ap-southeast-2:452832661640:image/resource-identifier | arn:aws:sagemaker:ap-southeast-2:648430277019:image/resource-identifier | 648430277019.dkr.ecr.ap-southeast-2.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ap-northeast-1  | arn:aws:sagemaker:ap-northeast-1:102112518831:image/resource-identifier | arn:aws:sagemaker:ap-northeast-1:010972774902:image/resource-identifier | 010972774902.dkr.ecr.ap-northeast-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ca-central-1  | arn:aws:sagemaker:ca-central-1:310906938811:image/resource-identifier | arn:aws:sagemaker:ca-central-1:481561238223:image/resource-identifier | 481561238223.dkr.ecr.ca-central-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  eu-central-1  | arn:aws:sagemaker:eu-central-1:936697816551:image/resource-identifier | arn:aws:sagemaker:eu-central-1:545423591354:image/resource-identifier | 545423591354.dkr.ecr.eu-central-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  eu-west-1  | arn:aws:sagemaker:eu-west-1:470317259841:image/resource-identifier | arn:aws:sagemaker:eu-west-1:819792524951:image/resource-identifier | 819792524951.dkr.ecr.eu-west-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  eu-west-2  | arn:aws:sagemaker:eu-west-2:712779665605:image/resource-identifier | arn:aws:sagemaker:eu-west-2:021081402939:image/resource-identifier | 021081402939.dkr.ecr.eu-west-2.amazonaws.com/sagemaker-distribution-prod:tag | 
|  eu-west-3  | arn:aws:sagemaker:eu-west-3:615547856133:image/resource-identifier | arn:aws:sagemaker:eu-west-3:856416204555:image/resource-identifier | 856416204555.dkr.ecr.eu-west-3.amazonaws.com/sagemaker-distribution-prod:tag | 
|  eu-north-1  | arn:aws:sagemaker:eu-north-1:243637512696:image/resource-identifier | arn:aws:sagemaker:eu-north-1:175620155138:image/resource-identifier | 175620155138.dkr.ecr.eu-north-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  eu-south-1  | arn:aws:sagemaker:eu-south-1:592751261982:image/resource-identifier | arn:aws:sagemaker:eu-south-1:810671768855:image/resource-identifier | 810671768855.dkr.ecr.eu-south-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  sa-east-1  | arn:aws:sagemaker:sa-east-1:782484402741:image/resource-identifier | arn:aws:sagemaker:sa-east-1:567556641782:image/resource-identifier | 567556641782.dkr.ecr.sa-east-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ap-northeast-3  | arn:aws:sagemaker:ap-northeast-3:792733760839:image/resource-identifier | arn:aws:sagemaker:ap-northeast-3:564864627153:image/resource-identifier | 564864627153.dkr.ecr.ap-northeast-3.amazonaws.com/sagemaker-distribution-prod:tag | 
|  ap-southeast-3  | arn:aws:sagemaker:ap-southeast-3:276181064229:image/resource-identifier | arn:aws:sagemaker:ap-southeast-3:370607712162:image/resource-identifier | 370607712162.dkr.ecr.ap-southeast-3.amazonaws.com/sagemaker-distribution-prod:tag | 
|  me-south-1  | arn:aws:sagemaker:me-south-1:117516905037:image/resource-identifier | arn:aws:sagemaker:me-south-1:523774347010:image/resource-identifier | 523774347010.dkr.ecr.me-south-1.amazonaws.com/sagemaker-distribution-prod:tag | 
|  me-central-1  | arn:aws:sagemaker:me-central-1:103105715889:image/resource-identifier | arn:aws:sagemaker:me-central-1:358593528301:image/resource-identifier | 358593528301.dkr.ecr.me-central-1.amazonaws.com/sagemaker-distribution-prod:tag | 

## Supported URI tags
<a name="notebooks-available-uri-tag"></a>

The following list shows the tags you can include in your image URI.
+ 1-cpu
+ 1-gpu
+ 0-cpu
+ 0-gpu

**The following examples show URIs with various tag formats:**
+ 542918446943.dkr.ecr.us-west-2.amazonaws.com/sagemaker-distribution-prod:1-cpu
+ 542918446943.dkr.ecr.us-west-2.amazonaws.com/sagemaker-distribution-prod:0-gpu
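
The ARN and URI formats above can be assembled with plain string substitution. The following is a minimal sketch: the account map reproduces only two entries from the Region table, the function names are this example's own, and the `{account}.dkr.ecr.{region}.amazonaws.com` host follows the standard Amazon ECR registry form.

```python
# Per-Region account IDs for SageMaker Distribution images; only two
# entries from the Region table are reproduced here as an example.
DISTRIBUTION_ACCOUNTS = {
    "us-east-1": "885854791233",
    "us-west-2": "542918446943",
}


def distribution_image_arn(region, resource_identifier):
    """Build the SageMaker Distribution image ARN for a Region."""
    account = DISTRIBUTION_ACCOUNTS[region]
    return f"arn:aws:sagemaker:{region}:{account}:image/{resource_identifier}"


def distribution_image_uri(region, tag):
    """Build the ECR image URI for a supported tag such as 1-cpu."""
    account = DISTRIBUTION_ACCOUNTS[region]
    return (
        f"{account}.dkr.ecr.{region}.amazonaws.com/"
        f"sagemaker-distribution-prod:{tag}"
    )
```

For example, `distribution_image_uri("us-west-2", "1-cpu")` reproduces the first URI in the list above.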

## Supported images
<a name="notebooks-available-images-supported"></a>

The following table gives information about the SageMaker images and associated kernels that are available in Amazon SageMaker Studio Classic. It also gives information about the resource identifier and Python version included in the image.

SageMaker images and kernels


| SageMaker Image | Description | Resource Identifier | Kernels (and Identifier) | Python Version | 
| --- | --- | --- | --- | --- | 
| Base Python 4.3 | Official Python 3.11 image from DockerHub with boto3 and AWS CLI included. | sagemaker-base-python-v4 | Python 3 (python3) | Python 3.11 | 
| Base Python 4.2 | Official Python 3.11 image from DockerHub with boto3 and AWS CLI included. | sagemaker-base-python-v4 | Python 3 (python3) | Python 3.11 | 
| Base Python 4.1 | Official Python 3.11 image from DockerHub with boto3 and AWS CLI included. | sagemaker-base-python-v4 | Python 3 (python3) | Python 3.11 | 
| Base Python 4.0 | Official Python 3.11 image from DockerHub with boto3 and AWS CLI included. | sagemaker-base-python-v4 | Python 3 (python3) | Python 3.11 | 
| Base Python 3.0 | Official Python 3.10 image from DockerHub with boto3 and AWS CLI included. | sagemaker-base-python-310-v1 | Python 3 (python3) | Python 3.10 | 
| Data Science 5.3 | Data Science 5.3 is a Python 3.11 [conda](https://docs.conda.io/projects/conda/en/latest/index.html) image based on Ubuntu version jammy-20240212. It includes the most commonly used Python packages and libraries, such as NumPy and SciKit Learn. | sagemaker-data-science-v5 | Python 3 (python3) | Python 3.11 | 
| Data Science 5.2 | Data Science 5.2 is a Python 3.11 [conda](https://docs.conda.io/projects/conda/en/latest/index.html) image based on Ubuntu version jammy-20240212. It includes the most commonly used Python packages and libraries, such as NumPy and SciKit Learn. | sagemaker-data-science-v5 | Python 3 (python3) | Python 3.11 | 
| Data Science 5.1 | Data Science 5.1 is a Python 3.11 [conda](https://docs.conda.io/projects/conda/en/latest/index.html) image based on Ubuntu version jammy-20240212. It includes the most commonly used Python packages and libraries, such as NumPy and SciKit Learn. | sagemaker-data-science-v5 | Python 3 (python3) | Python 3.11 | 
| Data Science 5.0 | Data Science 5.0 is a Python 3.11 [conda](https://docs.conda.io/projects/conda/en/latest/index.html) image based on Ubuntu version jammy-20240212. It includes the most commonly used Python packages and libraries, such as NumPy and SciKit Learn. | sagemaker-data-science-v5 | Python 3 (python3) | Python 3.11 | 
| Data Science 4.0 | Data Science 4.0 is a Python 3.11 [conda](https://docs.conda.io/projects/conda/en/latest/index.html) image based on Ubuntu version 22.04. It includes the most commonly used Python packages and libraries, such as NumPy and SciKit Learn. | sagemaker-data-science-311-v1 | Python 3 (python3) | Python 3.11 | 
| Data Science 3.0 | Data Science 3.0 is a Python 3.10 [conda](https://docs.conda.io/projects/conda/en/latest/index.html) image based on Ubuntu version 22.04. It includes the most commonly used Python packages and libraries, such as NumPy and SciKit Learn. | sagemaker-data-science-310-v1 | Python 3 (python3) | Python 3.10 | 
| Geospatial 1.0 | Amazon SageMaker geospatial is a Python image consisting of commonly used geospatial libraries such as GDAL, Fiona, GeoPandas, Shapely, and Rasterio. It allows you to visualize geospatial data within SageMaker AI. For more information, see [Amazon SageMaker geospatial Notebook SDK](https://docs.aws.amazon.com/sagemaker/latest/dg/geospatial-notebook-sdk.html). | sagemaker-geospatial-1.0 | Python 3 (python3) | Python 3.10 | 
| SparkAnalytics 4.3 | The SparkAnalytics 4.3 image provides Spark and PySpark kernel options on Amazon SageMaker Studio Classic, including SparkMagic Spark, SparkMagic PySpark, Glue Spark, and Glue PySpark, enabling flexible distributed data processing. | sagemaker-spark-analytics-v4 |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html)  | Python 3.11 | 
| SparkAnalytics 4.2 | The SparkAnalytics 4.2 image provides Spark and PySpark kernel options on Amazon SageMaker Studio Classic, including SparkMagic Spark, SparkMagic PySpark, Glue Spark, and Glue PySpark, enabling flexible distributed data processing. | sagemaker-spark-analytics-v4 |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html)  | Python 3.11 | 
| SparkAnalytics 4.1 | The SparkAnalytics 4.1 image provides Spark and PySpark kernel options on Amazon SageMaker Studio Classic, including SparkMagic Spark, SparkMagic PySpark, Glue Spark, and Glue PySpark, enabling flexible distributed data processing. | sagemaker-spark-analytics-v4 |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html)  | Python 3.11 | 
| SparkAnalytics 4.0 | The SparkAnalytics 4.0 image provides Spark and PySpark kernel options on Amazon SageMaker Studio Classic, including SparkMagic Spark, SparkMagic PySpark, Glue Spark, and Glue PySpark, enabling flexible distributed data processing. | sagemaker-spark-analytics-v4 |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html)  | Python 3.11 | 
| SparkAnalytics 3.0 | The SparkAnalytics 3.0 image provides Spark and PySpark kernel options on Amazon SageMaker Studio Classic, including SparkMagic Spark, SparkMagic PySpark, Glue Spark, and Glue PySpark, enabling flexible distributed data processing. | sagemaker-sparkanalytics-311-v1 | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html) | Python 3.11 | 
| SparkAnalytics 2.0 | Anaconda Individual Edition with PySpark and Spark kernels. For more information, see [sparkmagic](https://github.com/jupyter-incubator/sparkmagic). | sagemaker-sparkanalytics-310-v1 | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html) | Python 3.10 | 
| PyTorch 2.4.0 Python 3.11 CPU Optimized | The AWS Deep Learning Containers for PyTorch 2.4.0 with CUDA 12.4 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.4.0-cpu-py311 | Python 3 (python3) | Python 3.11 | 
| PyTorch 2.4.0 Python 3.11 GPU Optimized | The AWS Deep Learning Containers for PyTorch 2.4.0 with CUDA 12.4 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.4.0-gpu-py311 | Python 3 (python3) | Python 3.11 | 
| PyTorch 2.3.0 Python 3.11 CPU Optimized | The AWS Deep Learning Containers for PyTorch 2.3.0 with CUDA 12.1 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.3.0-cpu-py311 | Python 3 (python3) | Python 3.11 | 
| PyTorch 2.3.0 Python 3.11 GPU Optimized | The AWS Deep Learning Containers for PyTorch 2.3.0 with CUDA 12.1 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.3.0-gpu-py311 | Python 3 (python3) | Python 3.11 | 
| PyTorch 2.2.0 Python 3.10 CPU Optimized | The AWS Deep Learning Containers for PyTorch 2.2 with CUDA 12.1 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.2.0-cpu-py310 | Python 3 (python3) | Python 3.10 | 
| PyTorch 2.2.0 Python 3.10 GPU Optimized | The AWS Deep Learning Containers for PyTorch 2.2 with CUDA 12.1 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.2.0-gpu-py310 | Python 3 (python3) | Python 3.10 | 
| PyTorch 2.1.0 Python 3.10 CPU Optimized | The AWS Deep Learning Containers for PyTorch 2.1 with CUDA 12.1 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.1.0-cpu-py310 | Python 3 (python3) | Python 3.10 | 
| PyTorch 2.1.0 Python 3.10 GPU Optimized | The AWS Deep Learning Containers for PyTorch 2.1 with CUDA 12.1 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.1.0-gpu-py310 | Python 3 (python3) | Python 3.10 | 
| PyTorch 1.13 HuggingFace Python 3.10 Neuron Optimized | PyTorch 1.13 image with HuggingFace and Neuron packages installed for training on Trainium instances optimized for performance and scale on AWS. | pytorch-1.13-hf-neuron-py310 | Python 3 (python3) | Python 3.10 | 
| PyTorch 1.13 Python 3.10 Neuron Optimized | PyTorch 1.13 image with Neuron packages installed for training on Trainium instances optimized for performance and scale on AWS. | pytorch-1.13-neuron-py310 | Python 3 (python3) | Python 3.10 | 
| TensorFlow 2.14.0 Python 3.10 CPU Optimized | The AWS Deep Learning Containers for TensorFlow 2.14 with CUDA 11.8 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.14.1-cpu-py310-ubuntu20.04-sagemaker-v1.0 | Python 3 (python3) | Python 3.10 | 
| TensorFlow 2.14.0 Python 3.10 GPU Optimized | The AWS Deep Learning Containers for TensorFlow 2.14 with CUDA 11.8 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.14.1-gpu-py310-cu118-ubuntu20.04-sagemaker-v1.0 | Python 3 (python3) | Python 3.10 | 

## Images slated for deprecation
<a name="notebooks-available-images-deprecation"></a>

SageMaker AI ends support for an image the day after any of its packages reaches end of life, as determined by the package's publisher. The following SageMaker images are slated for deprecation. 

Images based on Python 3.8 reached [end-of-life](https://endoflife.date/python) on October 31, 2024. Starting on November 1, 2024, SageMaker AI will discontinue support for these images, and they will no longer be selectable from the Studio Classic UI. If you're using any of these images, we recommend that you move to an image with a later Python version to avoid non-compliance issues.
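The deprecation timing follows a simple rule: support ends the day after the image's Python version reaches end of life. A minimal sketch of that rule (the helper name and the end-of-life table are illustrative; Python 3.8's date is the one cited above):

```python
from datetime import date, timedelta

# Illustrative end-of-life dates keyed by Python minor version.
# Python 3.8's date is taken from this section.
PYTHON_EOL = {
    "3.8": date(2024, 10, 31),
}

def deprecation_date(python_version: str) -> date:
    """Support ends the day after the version reaches end of life."""
    return PYTHON_EOL[python_version] + timedelta(days=1)
```

Applied to Python 3.8, this yields the November 1, 2024 deprecation date shown in the table that follows.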

SageMaker images slated for deprecation


| SageMaker Image | Deprecation date | Description | Resource Identifier | Kernels | Python Version | 
| --- | --- | --- | --- | --- | --- | 
| SageMaker Distribution v0.12 CPU | November 1, 2024 | SageMaker Distribution v0 CPU is a Python 3.8 image that includes popular frameworks for machine learning, data science and visualization on CPU. This includes deep learning frameworks like PyTorch, TensorFlow and Keras; popular Python packages like numpy, scikit-learn and pandas; and IDEs like Jupyter Lab. For more information, see the [Amazon SageMaker AI Distribution](https://github.com/aws/sagemaker-distribution) repo.  | sagemaker-distribution-cpu-v0 | Python 3 (python3) | Python 3.8 | 
| SageMaker Distribution v0.12 GPU | November 1, 2024 | SageMaker Distribution v0 GPU is a Python 3.8 image that includes popular frameworks for machine learning, data science and visualization on GPU. This includes deep learning frameworks like PyTorch, TensorFlow and Keras; popular Python packages like numpy, scikit-learn and pandas; and IDEs like Jupyter Lab. For more information, see the [Amazon SageMaker AI Distribution](https://github.com/aws/sagemaker-distribution) repo.  | sagemaker-distribution-gpu-v0 | Python 3 (python3) | Python 3.8 | 
| Base Python 2.0 | November 1, 2024 | Official Python 3.8 image from DockerHub with boto3 and AWS CLI included. | sagemaker-base-python-38 | Python 3 (python3) | Python 3.8 | 
| Data Science 2.0 | November 1, 2024 | Data Science 2.0 is a Python 3.8 [conda](https://docs.conda.io/projects/conda/en/latest/index.html) image based on Ubuntu version 22.04. It includes the most commonly used Python packages and libraries, such as NumPy and SciKit Learn. | sagemaker-data-science-38 | Python 3 (python3) | Python 3.8 | 
| PyTorch 1.13 Python 3.9 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 1.13 with CUDA 11.3 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-1.13-cpu-py39 | Python 3 (python3) | Python 3.9 | 
| PyTorch 1.13 Python 3.9 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 1.13 with CUDA 11.7 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-1.13-gpu-py39 | Python 3 (python3) | Python 3.9 | 
| PyTorch 1.12 Python 3.8 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 1.12 with CUDA 11.3 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers for PyTorch 1.12.0](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-for-pytorch-1-12-0-on-sagemaker/). | pytorch-1.12-cpu-py38 | Python 3 (python3) | Python 3.8 | 
| PyTorch 1.12 Python 3.8 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 1.12 with CUDA 11.3 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers for PyTorch 1.12.0](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-for-pytorch-1-12-0-on-sagemaker/). | pytorch-1.12-gpu-py38 | Python 3 (python3) | Python 3.8 | 
| PyTorch 1.10 Python 3.8 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 1.10 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers for PyTorch 1.10.2 on SageMaker AI](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-for-pytorch-1-10-2-on-sagemaker/). | pytorch-1.10-cpu-py38 | Python 3 (python3) | Python 3.8 | 
| PyTorch 1.10 Python 3.8 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 1.10 with CUDA 11.3 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers for PyTorch 1.10.2 on SageMaker AI](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-for-pytorch-1-10-2-on-sagemaker/). | pytorch-1.10-gpu-py38 | Python 3 (python3) | Python 3.8 | 
| SparkAnalytics 1.0 | November 1, 2024 | Anaconda Individual Edition with PySpark and Spark kernels. For more information, see [sparkmagic](https://github.com/jupyter-incubator/sparkmagic). | sagemaker-sparkanalytics-v1 |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html)  | Python 3.8 | 
| TensorFlow 2.13.0 Python 3.10 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.13 with CUDA 11.8 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.13.0-cpu-py310-ubuntu20.04-sagemaker-v1.0 | Python 3 (python3) | Python 3.10 | 
| TensorFlow 2.13.0 Python 3.10 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.13 with CUDA 11.8 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers.](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html) | tensorflow-2.13.0-gpu-py310-cu118-ubuntu20.04-sagemaker-v1.0 | Python 3 (python3) | Python 3.10 | 
| TensorFlow 2.6 Python 3.8 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.6 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers for TensorFlow 2.6](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-for-tensorflow-2-6/). | tensorflow-2.6-cpu-py38-ubuntu20.04-v1 | Python 3 (python3) | Python 3.8 | 
| TensorFlow 2.6 Python 3.8 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.6 with CUDA 11.2 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers for TensorFlow 2.6](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-for-tensorflow-2-6/). | tensorflow-2.6-gpu-py38-cu112-ubuntu20.04-v1 | Python 3 (python3) | Python 3.8 | 
| PyTorch 2.0.1 Python 3.10 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 2.0.1 with CUDA 12.1 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.0.1-cpu-py310 | Python 3 (python3) | Python 3.10 | 
| PyTorch 2.0.1 Python 3.10 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 2.0.1 with CUDA 12.1 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.0.1-gpu-py310 | Python 3 (python3) | Python 3.10 | 
| PyTorch 2.0.0 Python 3.10 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 2.0.0 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.0.0-cpu-py310 | Python 3 (python3) | Python 3.10 | 
| PyTorch 2.0.0 Python 3.10 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for PyTorch 2.0.0 with CUDA 11.8 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | pytorch-2.0.0-gpu-py310 | Python 3 (python3) | Python 3.10 | 
| TensorFlow 2.12.0 Python 3.10 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.12.0 with CUDA 11.2 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.12.0-cpu-py310-ubuntu20.04-sagemaker-v1.0 | Python 3 (python3) | Python 3.10 | 
| TensorFlow 2.12.0 Python 3.10 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.12.0 with CUDA 11.8 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.12.0-gpu-py310-cu118-ubuntu20.04-sagemaker-v1 | Python 3 (python3) | Python 3.10 | 
| TensorFlow 2.11.0 Python 3.9 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.11.0 with CUDA 11.2 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.11.0-cpu-py39-ubuntu20.04-sagemaker-v1.1 | Python 3 (python3) | Python 3.9 | 
| TensorFlow 2.11.0 Python 3.9 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.11.0 with CUDA 11.2 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.11.0-gpu-py39-cu112-ubuntu20.04-sagemaker-v1.1 | Python 3 (python3) | Python 3.9 | 
| TensorFlow 2.10 Python 3.9 CPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.10 with CUDA 11.2 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.10.1-cpu-py39-ubuntu20.04-sagemaker-v1.2 | Python 3 (python3) | Python 3.9 | 
| TensorFlow 2.10 Python 3.9 GPU Optimized | November 1, 2024 | The AWS Deep Learning Containers for TensorFlow 2.10 with CUDA 11.2 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [Release Notes for Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html). | tensorflow-2.10.1-gpu-py39-ubuntu20.04-sagemaker-v1.2 | Python 3 (python3) | Python 3.9 | 

## Deprecated images
<a name="notebooks-available-images-deprecated"></a>

SageMaker AI has ended support for the following images. Deprecation occurs the day after any of the packages in the image reaches end of life, as determined by the package's publisher.

Deprecated SageMaker images


| SageMaker Image | Deprecation date | Description | Resource Identifier | Kernels | Python Version | 
| --- | --- | --- | --- | --- | --- | 
| Data Science | October 30, 2023 | Data Science is a Python 3.7 [conda](https://docs.conda.io/projects/conda/en/latest/index.html) image with the most commonly used Python packages and libraries, such as NumPy and SciKit Learn. | datascience-1.0 | Python 3 | Python 3.7 | 
| SageMaker JumpStart Data Science 1.0 | October 30, 2023 | SageMaker JumpStart Data Science 1.0 is a JumpStart image that includes commonly used packages and libraries. | sagemaker-jumpstart-data-science-1.0 | Python 3 | Python 3.7 | 
| SageMaker JumpStart MXNet 1.0 | October 30, 2023 | SageMaker JumpStart MXNet 1.0 is a JumpStart image that includes MXNet. | sagemaker-jumpstart-mxnet-1.0 | Python 3 | Python 3.7 | 
| SageMaker JumpStart PyTorch 1.0 | October 30, 2023 | SageMaker JumpStart PyTorch 1.0 is a JumpStart image that includes PyTorch. | sagemaker-jumpstart-pytorch-1.0 | Python 3 | Python 3.7 | 
| SageMaker JumpStart TensorFlow 1.0 | October 30, 2023 | SageMaker JumpStart TensorFlow 1.0 is a JumpStart image that includes TensorFlow. | sagemaker-jumpstart-tensorflow-1.0 | Python 3 | Python 3.7 | 
| SparkMagic | October 30, 2023 | Anaconda Individual Edition with PySpark and Spark kernels. For more information, see [sparkmagic](https://github.com/jupyter-incubator/sparkmagic). | sagemaker-sparkmagic |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html)  | Python 3.7 | 
| TensorFlow 2.3 Python 3.7 CPU Optimized | October 30, 2023 | The AWS Deep Learning Containers for TensorFlow 2.3 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers with TensorFlow 2.3.0](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-with-tensorflow-2-3-0/). | tensorflow-2.3-cpu-py37-ubuntu18.04-v1 | Python 3 | Python 3.7 | 
| TensorFlow 2.3 Python 3.7 GPU Optimized | October 30, 2023 | The AWS Deep Learning Containers for TensorFlow 2.3 with CUDA 11.0 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers for TensorFlow 2.3.1 with CUDA 11.0](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-for-tensorflow-2-3-1-with-cuda-11-0/). | tensorflow-2.3-gpu-py37-cu110-ubuntu18.04-v3 | Python 3 | Python 3.7 | 
| TensorFlow 1.15 Python 3.7 CPU Optimized | October 30, 2023 | The AWS Deep Learning Containers for TensorFlow 1.15 include containers for training on CPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers v7.0 for TensorFlow](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-v7-0-for-tensorflow/). | tensorflow-1.15-cpu-py37-ubuntu18.04-v7 | Python 3 | Python 3.7 | 
| TensorFlow 1.15 Python 3.7 GPU Optimized | October 30, 2023 | The AWS Deep Learning Containers for TensorFlow 1.15 with CUDA 11.0 include containers for training on GPU, optimized for performance and scale on AWS. For more information, see [AWS Deep Learning Containers v7.0 for TensorFlow](https://aws.amazon.com/releasenotes/aws-deep-learning-containers-v7-0-for-tensorflow/). | tensorflow-1.15-gpu-py37-cu110-ubuntu18.04-v8 | Python 3 | Python 3.7 | 

# Customize Amazon SageMaker Studio Classic
<a name="studio-customize"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

There are four options for customizing your Amazon SageMaker Studio Classic environment: bring your own SageMaker image, use a lifecycle configuration script, attach suggested Git repos to Studio Classic, or create kernels from persistent Conda environments in Amazon EFS. You can use each option individually or combine them. 
+ **Bring your own SageMaker image:** A SageMaker image is a file that identifies the kernels, language packages, and other dependencies required to run a Jupyter notebook in Amazon SageMaker Studio Classic. Amazon SageMaker AI provides many built-in images for you to use. If you need different functionality, you can bring your own custom images to Studio Classic.
+ **Use lifecycle configurations with Amazon SageMaker Studio Classic:** Lifecycle configurations are shell scripts triggered by Amazon SageMaker Studio Classic lifecycle events, such as starting a new Studio Classic notebook. You can use lifecycle configurations to automate customization for your Studio Classic environment. For example, you can install custom packages, configure notebook extensions, preload datasets, and set up source code repositories.
+ **Attach suggested Git repos to Studio Classic:** You can attach suggested Git repository URLs at the Amazon SageMaker AI domain or user profile level. Then, you can select the repo URL from the list of suggestions and clone that into your environment using the Git extension in Studio Classic. 
+ **Persist Conda environments to the Studio Classic Amazon EFS volume:** Studio Classic uses an Amazon EFS volume as a persistent storage layer. You can save your Conda environment on this Amazon EFS volume, then use the saved environment to create kernels. Studio Classic automatically picks up all valid environments saved in Amazon EFS as KernelGateway kernels. These kernels persist through restart of the kernel, app, and Studio Classic. For more information, see the **Persist Conda environments to the Studio Classic EFS volume** section in [Four approaches to manage Python packages in Amazon SageMaker Studio Classic notebooks](https://aws.amazon.com/blogs/machine-learning/four-approaches-to-manage-python-packages-in-amazon-sagemaker-studio-notebooks/).
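For the persistent Conda environment option, Studio Classic discovers environments through standard Jupyter kernel specs. A sketch of the `kernel.json` that such a kernel resolves to (the environment name and paths are illustrative placeholders, not values the service mandates):

```python
import json

# Hypothetical Conda environment persisted on the Studio Classic EFS home volume.
env_name = "my-custom-env"
env_prefix = f"/home/sagemaker-user/.conda/envs/{env_name}"

# The kernel spec that Jupyter (and therefore Studio Classic) would pick up:
# launch the environment's own interpreter as an ipykernel.
kernel_spec = {
    "argv": [
        f"{env_prefix}/bin/python",
        "-m",
        "ipykernel_launcher",
        "-f",
        "{connection_file}",
    ],
    "display_name": f"Python ({env_name})",
    "language": "python",
}

print(json.dumps(kernel_spec, indent=2))
```

Because the environment lives on the Amazon EFS volume rather than inside the app container, the kernel survives restarts of the kernel, the app, and Studio Classic itself.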

The following topics show how to use these options to customize your Amazon SageMaker Studio Classic environment.

**Topics**
+ [Custom Images in Amazon SageMaker Studio Classic](studio-byoi.md)
+ [Use Lifecycle Configurations to Customize Amazon SageMaker Studio Classic](studio-lcc.md)
+ [Attach Suggested Git Repos to Amazon SageMaker Studio Classic](studio-git-attach.md)

# Custom Images in Amazon SageMaker Studio Classic
<a name="studio-byoi"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

A SageMaker image is a file that identifies the kernels, language packages, and other dependencies required to run a Jupyter notebook in Amazon SageMaker Studio Classic. These images are used to create an environment that you then run Jupyter notebooks from. Amazon SageMaker AI provides many built-in images for you to use. For the list of built-in images, see [Amazon SageMaker Images Available for Use With Studio Classic Notebooks](notebooks-available-images.md).

If you need different functionality, you can bring your own custom images to Studio Classic. You can create images and image versions, and attach image versions to your domain or shared space, using the SageMaker AI control panel, the [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html), and the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/reference/sagemaker/). You can also create images and image versions using the SageMaker AI console, even if you haven't onboarded to a SageMaker AI domain. SageMaker AI provides sample Dockerfiles to use as a starting point for your custom SageMaker images in the [SageMaker Studio Classic Custom Image Samples](https://github.com/aws-samples/sagemaker-studio-custom-image-samples/) repository.
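With the AWS SDK for Python (Boto3), creating a custom image is a two-call sequence: create the SageMaker image, then register an image version that points at the Docker image in Amazon ECR. A sketch of the request payloads (the image name, account ID, role ARN, and repository are placeholders, not real resources):

```python
# Request payloads for the two SageMaker AI calls. The actual calls would be:
#   sm = boto3.client("sagemaker")
#   sm.create_image(**image_params)
#   sm.create_image_version(**version_params)
image_params = {
    "ImageName": "my-custom-image",                      # placeholder name
    "RoleArn": "arn:aws:iam::111122223333:role/MyRole",  # placeholder role
    "DisplayName": "My Custom Image",
}

version_params = {
    "ImageName": "my-custom-image",   # must match the image created above
    # ECR URI of the Docker image this version represents (placeholder).
    "BaseImage": "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
}
```

After the version exists, you attach it to a domain or shared space so Studio Classic can launch it.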

The following topics explain how to bring your own image using the SageMaker AI console or AWS CLI, then launch the image in Studio Classic. For a similar blog article, see [Bringing your own R environment to Amazon SageMaker Studio Classic](https://aws.amazon.com/blogs/machine-learning/bringing-your-own-r-environment-to-amazon-sagemaker-studio/). For notebooks that show how to bring your own image for use in training and inference, see [Amazon SageMaker Studio Classic Container Build CLI](https://github.com/aws/amazon-sagemaker-examples/tree/main/aws_sagemaker_studio/sagemaker_studio_image_build).

## Key terminology
<a name="studio-byoi-basics"></a>

The following section defines key terms for bringing your own image to use with Studio Classic.
+ **Dockerfile:** A Dockerfile is a file that identifies the language packages and other dependencies for your Docker image.
+ **Docker image:** A Docker image is the result of building a Dockerfile. This image is pushed to Amazon ECR and serves as the basis of the SageMaker AI image.
+ **SageMaker image:** A SageMaker image is a grouping of SageMaker AI image versions, each based on a Docker image.
+ **Image version:** An image version of a SageMaker image represents a Docker image and is stored in an Amazon ECR repository. Each image version is immutable. These image versions can be attached to a domain or shared space and used with Studio Classic.
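The Docker image behind each image version is addressed by an Amazon ECR URI built from the account, Region, repository, and tag. A sketch of that mapping (the account ID, Region, and repository are placeholders):

```python
def ecr_image_uri(account_id: str, region: str, repository: str, tag: str = "latest") -> str:
    """Build the ECR URI that a SageMaker image version points at."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

# Hypothetical account, Region, and repository.
uri = ecr_image_uri("111122223333", "us-east-1", "my-custom-image", "v1")
```

Because each image version is immutable, pushing a new tag to the repository requires registering a new image version that references the new URI.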

**Topics**
+ [Key terminology](#studio-byoi-basics)
+ [Custom SageMaker Image Specifications for Amazon SageMaker Studio Classic](studio-byoi-specs.md)
+ [Prerequisites for Custom Images in Amazon SageMaker Studio Classic](studio-byoi-prereq.md)
+ [Add a Docker Image Compatible with Amazon SageMaker Studio Classic to Amazon ECR](studio-byoi-sdk-add-container-image.md)
+ [Create a Custom SageMaker Image for Amazon SageMaker Studio Classic](studio-byoi-create.md)
+ [Attach a Custom SageMaker Image in Amazon SageMaker Studio Classic](studio-byoi-attach.md)
+ [Launch a Custom SageMaker Image in Amazon SageMaker Studio Classic](studio-byoi-launch.md)
+ [Clean Up Resources for Custom Images in Amazon SageMaker Studio Classic](studio-byoi-cleanup.md)

# Custom SageMaker Image Specifications for Amazon SageMaker Studio Classic
<a name="studio-byoi-specs"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following specifications apply to the container image that is represented by a SageMaker AI image version.

**Running the image**  
`ENTRYPOINT` and `CMD` instructions are overridden to enable the image to run as a KernelGateway app.  
Port 8888 in the image is reserved for running the KernelGateway web server.

**Stopping the image**  
The `DeleteApp` API issues the equivalent of a `docker stop` command. Other processes in the container won’t get the SIGKILL/SIGTERM signals.

**Kernel discovery**  
SageMaker AI recognizes kernels as defined by Jupyter [kernel specs](https://jupyter-client.readthedocs.io/en/latest/kernels.html#kernelspecs).  
You can specify a list of kernels to display before running the image. If not specified, python3 is displayed. Use the [DescribeAppImageConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeAppImageConfig.html) API to view the list of kernels.  
Conda environments are recognized as kernel specs by default. 
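A kernel spec is a plain `kernel.json` file in a `share/jupyter/kernels/<name>/` directory. As an illustration only (a sketch using a temporary directory, not SageMaker tooling), the following Python writes a minimal spec of the kind that Jupyter discovery, and therefore SageMaker AI, picks up:

```python
import json
import os
import tempfile

# Minimal kernel spec. The directory name ("python3" below) is the kernel
# name reported by DescribeAppImageConfig; display_name is what users see.
kernel_spec = {
    "argv": ["python3", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
    "display_name": "Python 3 (ipykernel)",
    "language": "python",
}

def write_kernel_spec(prefix, name, spec):
    """Write kernel.json under <prefix>/share/jupyter/kernels/<name>/."""
    kernel_dir = os.path.join(prefix, "share", "jupyter", "kernels", name)
    os.makedirs(kernel_dir, exist_ok=True)
    path = os.path.join(kernel_dir, "kernel.json")
    with open(path, "w") as f:
        json.dump(spec, f, indent=2)
    return path

spec_path = write_kernel_spec(tempfile.mkdtemp(), "python3", kernel_spec)
```

In a real image you would install the spec with `python3 -m ipykernel install` (as in the sample Dockerfile later in this topic) rather than writing the file by hand; the `Name` in your `AppImageConfig` must match the directory name exactly.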

**File system**  
The `/opt/.sagemakerinternal` and `/opt/ml` directories are reserved. Any data in these directories might not be visible at runtime.

**User data**  
Each user in a domain gets a user directory on a shared Amazon Elastic File System volume in the image. The location of the current user's directory on the Amazon EFS volume is configurable. By default, the location of the directory is `/home/sagemaker-user`.  
SageMaker AI configures POSIX UID/GID mappings between the image and the host. This defaults to mapping the root user's UID/GID (0/0) to the UID/GID on the host.  
You can specify these values using the [CreateAppImageConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAppImageConfig.html) API.

**GID/UID limits**  
Amazon SageMaker Studio Classic only supports the following `DefaultUID` and `DefaultGID` combinations:   
+  DefaultUID: 1000 and DefaultGID: 100, which corresponds to a non-privileged user.
+  DefaultUID: 0 and DefaultGID: 0, which corresponds to root access.

**Metadata**  
A metadata file is located at `/opt/ml/metadata/resource-metadata.json`. No additional environment variables are added to the variables defined in the image. For more information, see [Get App Metadata](notebooks-run-and-manage-metadata.md#notebooks-run-and-manage-metadata-app).
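As a sketch of reading that metadata from inside an app, the following Python loads the file and, because the file exists only inside a running app, demonstrates against a temporary copy with placeholder values (the field values shown are illustrative, not real identifiers):

```python
import json
import tempfile

def read_app_metadata(path="/opt/ml/metadata/resource-metadata.json"):
    """Load the app metadata file written into every Studio Classic app."""
    with open(path) as f:
        return json.load(f)

# Placeholder sample so the sketch runs outside a Studio Classic app.
sample = {
    "AppType": "KernelGateway",
    "DomainId": "d-xxxxxxxxxxxx",
    "UserProfileName": "profile-name",
    "ResourceName": "app-name",
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
metadata = read_app_metadata(f.name)
```

Inside a running app you would call `read_app_metadata()` with no arguments to use the real path.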

**GPU**  
On a GPU instance, the image is run with the `--gpus` option. Only the CUDA toolkit should be included in the image, not the NVIDIA drivers. For more information, see the [NVIDIA User Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html).

**Metrics and logging**  
Logs from the KernelGateway process are sent to Amazon CloudWatch in the customer’s account. The name of the log group is `/aws/sagemaker/studio`. The name of the log stream is `$domainID/$userProfileName/KernelGateway/$appName`.
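When scripting log retrieval (for example, with `aws logs get-log-events --log-group-name /aws/sagemaker/studio`), you need to compose that stream name from its parts. A small helper, shown here as a sketch:

```python
def kernel_gateway_log_stream(domain_id, user_profile_name, app_name):
    """Build the stream name $domainID/$userProfileName/KernelGateway/$appName."""
    return f"{domain_id}/{user_profile_name}/KernelGateway/{app_name}"

# Placeholder identifiers for illustration.
stream = kernel_gateway_log_stream("d-xxxxxxxxxxxx", "profile-name", "app-name")
```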

**Image size**  
Limited to 35 GB. To view the size of your image, run `docker image ls`.  


## Sample Dockerfile
<a name="studio-byoi-specs-sample"></a>

The following sample Dockerfile creates an image based on Amazon Linux 2, installs third-party packages and the `python3` kernel, and sets the scope to the non-privileged user.

```
FROM public.ecr.aws/amazonlinux/amazonlinux:2

ARG NB_USER="sagemaker-user"
ARG NB_UID="1000"
ARG NB_GID="100"

RUN \
    yum install --assumeyes python3 shadow-utils && \
    useradd --create-home --shell /bin/bash --gid "${NB_GID}" --uid ${NB_UID} ${NB_USER} && \
    yum clean all && \
    python3 -m pip install ipykernel && \
    python3 -m ipykernel install

USER ${NB_UID}
```

# Prerequisites for Custom Images in Amazon SageMaker Studio Classic
<a name="studio-byoi-prereq"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You must satisfy the following prerequisites to bring your own container for use with Amazon SageMaker Studio Classic.
+ The Docker application. For information about setting up Docker, see [Orientation and setup](https://docs.docker.com/get-started/).
+ Install the AWS CLI by following the steps in [Getting started with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html).
+ A local copy of any Dockerfile for creating a Studio Classic compatible image. For sample custom images, see the [SageMaker AI Studio Classic custom image samples](https://github.com/aws-samples/sagemaker-studio-custom-image-samples/) repository.
+ Permissions to access the Amazon Elastic Container Registry (Amazon ECR) service. For more information, see [Amazon ECR Managed Policies](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr_managed_policies.html).
+ An AWS Identity and Access Management (IAM) execution role that has the [AmazonSageMakerFullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonSageMakerFullAccess) policy attached. If you have onboarded to an Amazon SageMaker AI domain, you can get the role from the **Domain Summary** section of the SageMaker AI control panel.
+ Install the Studio Classic image build CLI by following the steps in [SageMaker Docker Build](https://github.com/aws-samples/sagemaker-studio-image-build-cli). This CLI enables you to build a Dockerfile using AWS CodeBuild.

# Add a Docker Image Compatible with Amazon SageMaker Studio Classic to Amazon ECR
<a name="studio-byoi-sdk-add-container-image"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You perform the following steps to add a container image to Amazon ECR:
+ Create an Amazon ECR repository.
+ Authenticate to Amazon ECR.
+ Build a Docker image compatible with Studio Classic.
+ Push the image to the Amazon ECR repository.

**Note**  
The Amazon ECR repository must be in the same AWS Region as Studio Classic.

**To build and add a container image to Amazon ECR**

1. Create an Amazon ECR repository using the AWS CLI. To create the repository using the Amazon ECR console, see [Creating a repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html).

   ```
   aws ecr create-repository \
       --repository-name smstudio-custom \
       --image-scanning-configuration scanOnPush=true
   ```

   The response should look similar to the following.

   ```
   {
       "repository": {
           "repositoryArn": "arn:aws:ecr:us-east-2:acct-id:repository/smstudio-custom",
           "registryId": "acct-id",
           "repositoryName": "smstudio-custom",
        "repositoryUri": "acct-id.dkr.ecr.us-east-2.amazonaws.com/smstudio-custom",
           ...
       }
   }
   ```

1. Build the `Dockerfile` using the Studio Classic image build CLI. The period (.) specifies that the Dockerfile and build context are in the current directory. This command builds the image, uploads the built image to the Amazon ECR repository, and then outputs the image URI.

   ```
   sm-docker build . --repository smstudio-custom:custom
   ```

   The response should look similar to the following.

   ```
   Image URI: <acct-id>.dkr.ecr.<region>.amazonaws.com/<image_name>
   ```
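If you are scripting the later create and attach steps, it can help to split that image URI into its parts. The following is a sketch with a hypothetical helper and a placeholder account ID:

```python
import re

# Pattern for an Amazon ECR image URI as printed by sm-docker build:
# <account>.dkr.ecr.<region>.amazonaws.com/<repo>[:tag]
ECR_URI = re.compile(
    r"^(?P<account>\d+)\.dkr\.ecr\.(?P<region>[a-z0-9-]+)\.amazonaws\.com/"
    r"(?P<repo>[^:@]+)(?::(?P<tag>.+))?$"
)

def parse_ecr_uri(uri):
    """Split an Amazon ECR image URI into account, region, repo, and tag."""
    match = ECR_URI.match(uri)
    if not match:
        raise ValueError(f"not an Amazon ECR image URI: {uri}")
    return match.groupdict()

# 123456789012 is a placeholder account ID.
info = parse_ecr_uri("123456789012.dkr.ecr.us-east-2.amazonaws.com/smstudio-custom:custom")
```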

# Create a Custom SageMaker Image for Amazon SageMaker Studio Classic
<a name="studio-byoi-create"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

This topic describes how you can create a custom SageMaker image using the SageMaker AI console or AWS CLI.

When you create an image from the console, SageMaker AI also creates an initial image version. The image version represents a container image in [Amazon Elastic Container Registry (Amazon ECR)](https://console.aws.amazon.com/ecr/). The container image must satisfy the requirements to be used in Amazon SageMaker Studio Classic. For more information, see [Custom SageMaker Image Specifications for Amazon SageMaker Studio Classic](studio-byoi-specs.md). For information on testing your image locally and resolving common issues, see the [SageMaker Studio Classic Custom Image Samples repo](https://github.com/aws-samples/sagemaker-studio-custom-image-samples/blob/main/DEVELOPMENT.md).

After you have created your custom SageMaker image, you must attach it to your domain or shared space to use it with Studio Classic. For more information, see [Attach a Custom SageMaker Image in Amazon SageMaker Studio Classic](studio-byoi-attach.md).

## Create a SageMaker image from the console
<a name="studio-byoi-create-console"></a>

The following section demonstrates how to create a custom SageMaker image from the SageMaker AI console.

**To create an image**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Images**. 

1. On the **Custom images** page, choose **Create image**.

1. For **Image source**, enter the registry path to the container image in Amazon ECR. The path is in the following format:

   `acct-id.dkr.ecr.region.amazonaws.com/repo-name[:tag]` or `acct-id.dkr.ecr.region.amazonaws.com/repo-name@digest`

1. Choose **Next**.

1. Under **Image properties**, enter the following:
   + Image name – The name must be unique to your account in the current AWS Region.
   + (Optional) Display name – The name displayed in the Studio Classic user interface. When not provided, `Image name` is displayed.
   + (Optional) Description – A description of the image.
   + IAM role – The role must have the [AmazonSageMakerFullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonSageMakerFullAccess) policy attached. Use the dropdown menu to choose one of the following options:
     + Create a new role – Specify any additional Amazon Simple Storage Service (Amazon S3) buckets that you want users of your notebooks to have access to. If you don't want to allow access to additional buckets, choose **None**.

       SageMaker AI attaches the `AmazonSageMakerFullAccess` policy to the role. The role allows users of your notebooks access to the S3 buckets listed next to the checkmarks.
     + Enter a custom IAM role ARN – Enter the Amazon Resource Name (ARN) of your IAM role.
     + Use existing role – Choose one of your existing roles from the list.
   + (Optional) Image tags – Choose **Add new tag**. You can add up to 50 tags. Tags are searchable using the Studio Classic user interface, the SageMaker AI console, or the SageMaker AI `Search` API.

1. Choose **Submit**.

The new image is displayed in the **Custom images** list and briefly highlighted. After the image has been successfully created, you can choose the image name to view its properties or choose **Create version** to create another version.

**To create another image version**

1. Choose **Create version** on the same row as the image.

1. For **Image source**, enter the registry path to the Amazon ECR container image. The container image shouldn't be the same image as used in a previous version of the SageMaker image.

## Create a SageMaker image from the AWS CLI
<a name="studio-byoi-sdk-create-image"></a>

You perform the following steps to create a SageMaker image from the container image using the AWS CLI.
+ Create an `Image`.
+ Create an `ImageVersion`.
+ Create a configuration file.
+ Create an `AppImageConfig`.

**To create the SageMaker image entities**

1. Create a SageMaker image.

   ```
   aws sagemaker create-image \
       --image-name custom-image \
       --role-arn arn:aws:iam::<acct-id>:role/service-role/<execution-role>
   ```

   The response should look similar to the following.

   ```
   {
       "ImageArn": "arn:aws:sagemaker:us-east-2:acct-id:image/custom-image"
   }
   ```

1. Create a SageMaker image version from the container image.

   ```
   aws sagemaker create-image-version \
       --image-name custom-image \
       --base-image <acct-id>.dkr.ecr.<region>.amazonaws.com/smstudio-custom:custom-image
   ```

   The response should look similar to the following.

   ```
   {
       "ImageVersionArn": "arn:aws:sagemaker:us-east-2:acct-id:image-version/custom-image/1"
   }
   ```

1. Check that the image version was successfully created.

   ```
   aws sagemaker describe-image-version \
       --image-name custom-image \
       --version-number 1
   ```

   The response should look similar to the following.

   ```
   {
       "ImageVersionArn": "arn:aws:sagemaker:us-east-2:acct-id:image-version/custom-image/1",
       "ImageVersionStatus": "CREATED"
   }
   ```
**Note**  
If the response is `"ImageVersionStatus": "CREATE_FAILED"`, the response also includes the failure reason. A permissions issue is a common cause of failure. You can also check your Amazon CloudWatch logs if you experience a failure when starting or running the KernelGateway app for a custom image. The name of the log group is `/aws/sagemaker/studio`. The name of the log stream is `$domainID/$userProfileName/KernelGateway/$appName`.

1. Create a configuration file, named `app-image-config-input.json`. The `Name` value of `KernelSpecs` must match the name of the kernelSpec available in the Image associated with this `AppImageConfig`. This value is case sensitive. You can find the available kernelSpecs in an image by running `jupyter-kernelspec list` from a shell inside the container. `MountPath` is the path within the image to mount your Amazon Elastic File System (Amazon EFS) home directory. It needs to be different from the path you use inside the container because that path will be overridden when your Amazon EFS home directory is mounted.
**Note**  
The following `DefaultUID` and `DefaultGID` combinations are the only accepted values:   
 DefaultUID: 1000 and DefaultGID: 100 
 DefaultUID: 0 and DefaultGID: 0 

   ```
   {
       "AppImageConfigName": "custom-image-config",
       "KernelGatewayImageConfig": {
           "KernelSpecs": [
               {
                   "Name": "python3",
                   "DisplayName": "Python 3 (ipykernel)"
               }
           ],
           "FileSystemConfig": {
               "MountPath": "/home/sagemaker-user",
               "DefaultUid": 1000,
               "DefaultGid": 100
           }
       }
   }
   ```

1. Create the AppImageConfig using the file created in the previous step.

   ```
   aws sagemaker create-app-image-config \
       --cli-input-json file://app-image-config-input.json
   ```

   The response should look similar to the following.

   ```
   {
       "AppImageConfigArn": "arn:aws:sagemaker:us-east-2:acct-id:app-image-config/custom-image-config"
   }
   ```
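Image version creation is asynchronous, so scripts typically poll step 3 until a terminal status appears. The following sketch keeps the polling logic separate from the AWS call: `fetch_status` is any callable that returns the current `ImageVersionStatus` string, for example a wrapper around `aws sagemaker describe-image-version` or boto3's `describe_image_version`.

```python
import time

def wait_for_image_version(fetch_status, attempts=30, delay=10.0):
    """Poll until the image version reaches CREATED or fails.

    fetch_status: callable returning the current ImageVersionStatus string.
    """
    for _ in range(attempts):
        status = fetch_status()
        if status == "CREATED":
            return status
        if status == "CREATE_FAILED":
            raise RuntimeError("image version creation failed; see the failure reason")
        time.sleep(delay)
    raise TimeoutError("image version did not reach CREATED in time")
```

Injecting the fetch function also makes the polling loop easy to exercise without an AWS account.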

# Attach a Custom SageMaker Image in Amazon SageMaker Studio Classic
<a name="studio-byoi-attach"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

To use a custom SageMaker image, you must attach a version of the image to your domain or shared space. When you attach an image version, it appears in the SageMaker Studio Classic Launcher and is available in the **Select image** dropdown list, which users use to launch an activity or change the image used by a notebook.

To make a custom SageMaker image available to all users within a domain, you attach the image to the domain. To make an image available to all users within a shared space, you can attach the image to the shared space. To make an image available to a single user, you attach the image to the user's profile. When you attach an image, SageMaker AI uses the latest image version by default. You can also attach a specific image version. After you attach the version, you can choose the version from the SageMaker AI Launcher or the image selector when you launch a notebook.

There is a limit to the number of image versions that can be attached at any given time. After you reach the limit, you must detach a version in order to attach another version of the image.

The following sections demonstrate how to attach a custom SageMaker image to your domain using either the SageMaker AI console or the AWS CLI. You can only attach a custom image to a shared space using the AWS CLI.

## Attach the SageMaker image to a domain
<a name="studio-byoi-attach-domain"></a>

### Attach the SageMaker image using the Console
<a name="studio-byoi-attach-existing"></a>

This topic describes how you can attach an existing custom SageMaker image version to your domain using the SageMaker AI control panel. You can also create a custom SageMaker image and image version, and then attach that version to your domain. For the procedure to create an image and image version, see [Create a Custom SageMaker Image for Amazon SageMaker Studio Classic](studio-byoi-create.md).

**To attach an existing image**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**.

1. From the **Domains** page, select the domain to attach the image to.

1. From the **Domain details** page, select the **Environment** tab.

1. On the **Environment** tab, under **Custom SageMaker Studio Classic images attached to domain**, choose **Attach image**.

1. For **Image source**, choose **Existing image**.

1. Choose an existing image from the list.

1. Choose a version of the image from the list.

1. Choose **Next**.

1. Verify the values for **Image name**, **Image display name**, and **Description**.

1. Choose the IAM role. For more information, see [Create a Custom SageMaker Image for Amazon SageMaker Studio Classic](studio-byoi-create.md).

1. (Optional) Add tags for the image.

1. Specify the EFS mount path. This is the path within the image to mount the user's Amazon Elastic File System (EFS) home directory.

1. For **Image type**, select **SageMaker Studio image**.

1. For **Kernel name**, enter the name of an existing kernel in the image. For information on how to get the kernel information from the image, see [DEVELOPMENT](https://github.com/aws-samples/sagemaker-studio-custom-image-samples/blob/main/DEVELOPMENT.md) in the SageMaker Studio Classic Custom Image Samples repository. For more information, see the **Kernel discovery** and **User data** sections of [Custom SageMaker Image Specifications for Amazon SageMaker Studio Classic](studio-byoi-specs.md).

1. (Optional) For **Kernel display name**, enter the display name for the kernel.

1. Choose **Add kernel**.

1. Choose **Submit**. 

   Wait for the image version to be attached to the domain. When attached, the version is displayed in the **Custom images** list and briefly highlighted.

### Attach the SageMaker image using the AWS CLI
<a name="studio-byoi-sdk-attach"></a>

The following sections demonstrate how to attach a custom SageMaker image when creating a new domain or updating your existing domain using the AWS CLI.

#### Attach the SageMaker image to a new domain
<a name="studio-byoi-sdk-attach-new-domain"></a>

The following section demonstrates how to create a new domain with the version attached. These steps require that you specify the Amazon Virtual Private Cloud (VPC) information and execution role required to create the domain. You perform the following steps to create the domain and attach the custom SageMaker image:
+ Get your default VPC ID and subnet IDs.
+ Create the configuration file for the domain, which specifies the image.
+ Create the domain with the configuration file.

**To add the custom SageMaker image to your domain**

1. Get your default VPC ID.

   ```
   aws ec2 describe-vpcs \
       --filters Name=isDefault,Values=true \
       --query "Vpcs[0].VpcId" --output text
   ```

   The response should look similar to the following.

   ```
   vpc-xxxxxxxx
   ```

1. Get your default subnet IDs using the VPC ID from the previous step.

   ```
   aws ec2 describe-subnets \
       --filters Name=vpc-id,Values=<vpc-id> \
       --query "Subnets[*].SubnetId" --output json
   ```

   The response should look similar to the following.

   ```
   [
       "subnet-b55171dd",
       "subnet-8a5f99c6",
       "subnet-e88d1392"
   ]
   ```

1. Create a configuration file named `create-domain-input.json`. Insert the VPC ID, subnet IDs, `ImageName`, and `AppImageConfigName` from the previous steps. Because `ImageVersionNumber` isn't specified, the latest version of the image is used, which is the only version in this case.

   ```
   {
       "DomainName": "domain-with-custom-image",
       "VpcId": "<vpc-id>",
       "SubnetIds": [
           "<subnet-ids>"
       ],
       "DefaultUserSettings": {
           "ExecutionRole": "<execution-role>",
           "KernelGatewayAppSettings": {
               "CustomImages": [
                   {
                       "ImageName": "custom-image",
                       "AppImageConfigName": "custom-image-config"
                   }
               ]
           }
       },
       "AuthMode": "IAM"
   }
   ```

1. Create the domain with the attached custom SageMaker image.

   ```
   aws sagemaker create-domain \
       --cli-input-json file://create-domain-input.json
   ```

   The response should look similar to the following.

   ```
   {
       "DomainArn": "arn:aws:sagemaker:us-east-2:acct-id:domain/d-xxxxxxxxxxxx",
       "Url": "https://d-xxxxxxxxxxxx.studio.us-east-2.sagemaker.aws/..."
   }
   ```

#### Attach the SageMaker image to your current domain
<a name="studio-byoi-sdk-attach-current-domain"></a>

If you have onboarded to a SageMaker AI domain, you can attach the custom image to your current domain. For more information about onboarding to a SageMaker AI domain, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md). You don't need to specify the VPC information and execution role when attaching a custom image to your current domain. After you attach the version, you must delete all the apps in your domain and reopen Studio Classic. For information about deleting the apps, see [Delete an Amazon SageMaker AI domain](gs-studio-delete-domain.md).

You perform the following steps to add the SageMaker image to your current domain.
+ Get your `DomainID` from SageMaker AI control panel.
+ Use the `DomainID` to get the `DefaultUserSettings` for the domain.
+ Add the `ImageName` and `AppImageConfig` as a `CustomImage` to the `DefaultUserSettings`.
+ Update your domain to include the custom image.

**To add the custom SageMaker image to your domain**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**.

1. From the **Domains** page, select the domain to attach the image to.

1. From the **Domain details** page, select the **Domain settings** tab.

1. From the **Domain settings** tab, under **General settings**, find the `DomainId`. The ID is in the following format: `d-xxxxxxxxxxxx`.

1. Use the domain ID to get the description of the domain.

   ```
   aws sagemaker describe-domain \
       --domain-id <d-xxxxxxxxxxxx>
   ```

   The response should look similar to the following.

   ```
   {
       "DomainId": "d-xxxxxxxxxxxx",
       "DefaultUserSettings": {
         "KernelGatewayAppSettings": {
           "CustomImages": [
           ],
           ...
         }
       }
   }
   ```

1. Save the default user settings section of the response to a file named `default-user-settings.json`.

1. Insert the `ImageName` and `AppImageConfigName` from the previous steps as a custom image. Because `ImageVersionNumber` isn't specified, the latest version of the image is used, which is the only version in this case.

   ```
   {
       "DefaultUserSettings": {
           "KernelGatewayAppSettings": { 
              "CustomImages": [ 
                 { 
                    "ImageName": "string",
                    "AppImageConfigName": "string"
                 }
              ],
              ...
           }
       }
   }
   ```

1. Use the domain ID and default user settings file to update your domain.

   ```
   aws sagemaker update-domain \
       --domain-id <d-xxxxxxxxxxxx> \
       --cli-input-json file://default-user-settings.json
   ```

   The response should look similar to the following.

   ```
   {
       "DomainArn": "arn:aws:sagemaker:us-east-2:acct-id:domain/d-xxxxxxxxxxxx"
   }
   ```
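Steps 7 through 9 above amount to a small JSON edit: add one entry to the `CustomImages` list inside the saved settings. A sketch of that edit as a hypothetical helper (the same shape works for the shared-space procedure below by passing `settings_key="DefaultSpaceSettings"`):

```python
def attach_custom_image(settings, image_name, app_image_config_name,
                        settings_key="DefaultUserSettings"):
    """Add a custom image entry to the saved domain settings dict, in place."""
    images = (settings.setdefault(settings_key, {})
                      .setdefault("KernelGatewayAppSettings", {})
                      .setdefault("CustomImages", []))
    entry = {"ImageName": image_name, "AppImageConfigName": app_image_config_name}
    if entry not in images:  # avoid attaching the same image twice
        images.append(entry)
    return settings

settings = attach_custom_image({}, "custom-image", "custom-image-config")
```

You would then serialize `settings` to `default-user-settings.json` and pass it to `aws sagemaker update-domain` as shown above.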

## Attach the SageMaker image to a shared space
<a name="studio-byoi-attach-shared-space"></a>

You can only attach the SageMaker image to a shared space using the AWS CLI. After you attach the version, you must delete all of the applications in your shared space and reopen Studio Classic. For information about deleting the apps, see [Delete an Amazon SageMaker AI domain](gs-studio-delete-domain.md).

You perform the following steps to add the SageMaker image to a shared space.
+ Get your `DomainID` from SageMaker AI control panel.
+ Use the `DomainID` to get the `DefaultSpaceSettings` for the domain.
+ Add the `ImageName` and `AppImageConfig` as a `CustomImage` to the `DefaultSpaceSettings`.
+ Update your domain to include the custom image for the shared space.

**To add the custom SageMaker image to your shared space**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**.

1. From the **Domains** page, select the domain to attach the image to.

1. From the **Domain details** page, select the **Domain settings** tab.

1. From the **Domain settings** tab, under **General settings**, find the `DomainId`. The ID is in the following format: `d-xxxxxxxxxxxx`.

1. Use the domain ID to get the description of the domain.

   ```
   aws sagemaker describe-domain \
       --domain-id <d-xxxxxxxxxxxx>
   ```

   The response should look similar to the following.

   ```
   {
       "DomainId": "d-xxxxxxxxxxxx",
       ...
       "DefaultSpaceSettings": {
         "KernelGatewayAppSettings": {
           "CustomImages": [
           ],
           ...
         }
       }
   }
   ```

1. Save the default space settings section of the response to a file named `default-space-settings.json`.

1. Insert the `ImageName` and `AppImageConfigName` from the previous steps as a custom image. Because `ImageVersionNumber` isn't specified, the latest version of the image is used, which is the only version in this case.

   ```
   {
       "DefaultSpaceSettings": {
           "KernelGatewayAppSettings": { 
              "CustomImages": [ 
                 { 
                    "ImageName": "string",
                    "AppImageConfigName": "string"
                 }
              ],
              ...
           }
       }
   }
   ```

1. Use the domain ID and default space settings file to update your domain.

   ```
   aws sagemaker update-domain \
       --domain-id <d-xxxxxxxxxxxx> \
       --cli-input-json file://default-space-settings.json
   ```

   The response should look similar to the following.

   ```
   {
       "DomainArn": "arn:aws:sagemaker:us-east-2:acct-id:domain/d-xxxxxxxxxxxx"
   }
   ```
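The settings-file edit in the preceding steps can also be scripted. The following is a minimal sketch, assuming the `describe-domain` response was saved to a hypothetical file named `domain.json` and that `custom-image` and `custom-image-config` are the image and app image config names you created earlier.

```
# Stand-in for the describe-domain response; replace with your real output:
#   aws sagemaker describe-domain --domain-id d-xxxxxxxxxxxx > domain.json
cat > domain.json <<'EOF'
{
    "DomainId": "d-xxxxxxxxxxxx",
    "DefaultSpaceSettings": {
        "KernelGatewayAppSettings": {
            "CustomImages": []
        }
    }
}
EOF

# Extract DefaultSpaceSettings, append the custom image, and write the
# file that the update-domain command expects.
python3 - <<'EOF'
import json

with open("domain.json") as f:
    domain = json.load(f)

settings = domain["DefaultSpaceSettings"]
settings["KernelGatewayAppSettings"]["CustomImages"].append(
    {"ImageName": "custom-image", "AppImageConfigName": "custom-image-config"}
)

with open("default-space-settings.json", "w") as f:
    json.dump({"DefaultSpaceSettings": settings}, f, indent=4)
EOF
```

With `default-space-settings.json` written this way, the `update-domain` command from the preceding steps applies the change.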

## View the attached image in SageMaker AI
<a name="studio-byoi-sdk-view"></a>

After you create the custom SageMaker image and attach it to your domain, the image appears on the **Environment** tab of the domain. For shared spaces, you can view the attached images only by using the AWS CLI, with the following command.

```
aws sagemaker describe-domain \
    --domain-id <d-xxxxxxxxxxxx>
```

# Launch a Custom SageMaker Image in Amazon SageMaker Studio Classic
<a name="studio-byoi-launch"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

After you create your custom SageMaker image and attach it to your domain or shared space, the custom image and kernel appear in selectors in the **Change environment** dialog box of the Studio Classic Launcher.

**To launch and select your custom image and kernel**

1. In Amazon SageMaker Studio Classic, open the Launcher. To open the Launcher, choose **Amazon SageMaker Studio Classic** at the top left of the Studio Classic interface or use the keyboard shortcut `Ctrl + Shift + L`.

   To learn about all the available ways to open the Launcher, see [Use the Amazon SageMaker Studio Classic Launcher](studio-launcher.md).  
![\[SageMaker Studio Classic launcher.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-new-launcher.png)

1. In the Launcher, in the **Notebooks and compute resources** section, choose **Change environment**.

1. In the **Change environment** dialog, use the dropdown menus to select your **Image** from the **Custom Image** section and your **Kernel**. Then choose **Select**.

1. In the Launcher, choose **Create notebook** or **Open image terminal**. Your notebook or terminal launches in the selected custom image and kernel.

To change your image or kernel in an open notebook, see [Change the Image or a Kernel for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-change-image.md).

**Note**  
If you encounter an error when launching the image, check your Amazon CloudWatch logs. The name of the log group is `/aws/sagemaker/studio`. The name of the log stream is `$domainID/$userProfileName/KernelGateway/$appName`.
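As a sketch, the log stream name can be assembled from the identifiers of the failing application. The values below are placeholders, and the commented-out `aws logs` call is one way to retrieve the events once real values are substituted.

```
# Placeholder identifiers; substitute your own values.
DOMAIN_ID=d-xxxxxxxxxxxx
USER_PROFILE_NAME=user-profile-name
APP_NAME=app-name

LOG_STREAM="${DOMAIN_ID}/${USER_PROFILE_NAME}/KernelGateway/${APP_NAME}"
echo "$LOG_STREAM"

# With real identifiers in place, fetch the log events:
#   aws logs get-log-events \
#       --log-group-name /aws/sagemaker/studio \
#       --log-stream-name "$LOG_STREAM"
```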

# Clean Up Resources for Custom Images in Amazon SageMaker Studio Classic
<a name="studio-byoi-cleanup"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following sections show how to clean up the resources you created in the previous sections from the SageMaker AI console or AWS CLI. You perform the following steps to clean up the resources:
+ Detach the image and image versions from your domain.
+ Delete the image, image version, and app image config.
+ Delete the container image and repository from Amazon ECR. For more information, see [Deleting a repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-delete.html).

## Clean up resources from the SageMaker AI console
<a name="studio-byoi-detach"></a>

The following section shows how to clean up resources from the SageMaker AI console.

When you detach an image from a domain, all versions of the image are detached, and all users of the domain lose access to those versions. A notebook that is running a kernel session on an image version when that version is detached continues to run. When the notebook is stopped or the kernel is shut down, the image version becomes unavailable.

**To detach an image**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Images**. 

1. Under **Custom SageMaker Studio Classic images attached to domain**, choose the image and then choose **Detach**.

1. (Optional) To delete the image and all versions from SageMaker AI, select **Also delete the selected images ...**. This does not delete the associated container images from Amazon ECR.

1. Choose **Detach**.

## Clean up resources from the AWS CLI
<a name="studio-byoi-sdk-cleanup"></a>

The following section shows how to clean up resources from the AWS CLI.

**To clean up resources**

1. Detach the image and image versions from your domain by passing an empty custom image list to the domain. Open the `default-user-settings.json` file you created in [Attach the SageMaker image to your current domain](studio-byoi-attach.md#studio-byoi-sdk-attach-current-domain). To detach the image and image version from a shared space, open the `default-space-settings.json` file.

1. Remove the custom images from the `CustomImages` list, and then save the file.

   ```
   "DefaultUserSettings": {
     "KernelGatewayAppSettings": {
        "CustomImages": [
        ],
        ...
     },
     ...
   }
   ```

1. Use the domain ID and default user settings file to update your domain. To update your shared space, use the default space settings file.

   ```
   aws sagemaker update-domain \
       --domain-id <d-xxxxxxxxxxxx> \
       --cli-input-json file://default-user-settings.json
   ```

   The response should look similar to the following.

   ```
   {
       "DomainArn": "arn:aws:sagemaker:us-east-2:acct-id:domain/d-xxxxxxxxxxxx"
   }
   ```

1. Delete the app image config.

   ```
   aws sagemaker delete-app-image-config \
       --app-image-config-name custom-image-config
   ```

1. Delete the SageMaker image, which also deletes all image versions. The container images in Amazon ECR that are represented by the image versions are not deleted.

   ```
   aws sagemaker delete-image \
       --image-name custom-image
   ```

# Use Lifecycle Configurations to Customize Amazon SageMaker Studio Classic
<a name="studio-lcc"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic runs lifecycle configuration shell scripts during important lifecycle events, such as starting a new Studio Classic notebook. You can use lifecycle configurations to automate customization for your Studio Classic environment. This customization includes installing custom packages, configuring notebook extensions, preloading datasets, and setting up source code repositories.

Using lifecycle configurations gives you flexibility and control to configure Studio Classic to meet your specific needs. For example, you can use customized container images with lifecycle configuration scripts to modify your environment. First, create a minimal set of base container images, then install the most commonly used packages and libraries in those images. After you have completed your images, use lifecycle configurations to install additional packages for specific use cases. This gives you the flexibility to modify your environment across your data science and machine learning teams based on need.

Users can select only the lifecycle configuration scripts that they have been given access to. In addition to giving access to multiple lifecycle configuration scripts, you can set default lifecycle configuration scripts for resources. Depending on the resource that the default lifecycle configuration is set for, the default either runs automatically or is the first option shown.

For example lifecycle configuration scripts, see the [Studio Classic Lifecycle Configuration examples GitHub repository](https://github.com/aws-samples/sagemaker-studio-lifecycle-config-examples). For a blog post about implementing lifecycle configurations, see [Customize Amazon SageMaker Studio Classic using Lifecycle Configurations](https://aws.amazon.com/blogs/machine-learning/customize-amazon-sagemaker-studio-using-lifecycle-configurations/).

**Note**  
Each script has a limit of **16,384 characters**.

**Topics**
+ [Create and Associate a Lifecycle Configuration with Amazon SageMaker Studio Classic](studio-lcc-create.md)
+ [Set Default Lifecycle Configurations for Amazon SageMaker Studio Classic](studio-lcc-defaults.md)
+ [Debug Lifecycle Configurations in Amazon SageMaker Studio Classic](studio-lcc-debug.md)
+ [Update and Detach Lifecycle Configurations in Amazon SageMaker Studio Classic](studio-lcc-delete.md)

# Create and Associate a Lifecycle Configuration with Amazon SageMaker Studio Classic
<a name="studio-lcc-create"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker AI provides interactive applications that enable Studio Classic's visual interface, code authoring, and run experience. The following topics show how to create a lifecycle configuration and associate it with a SageMaker AI domain.

Application types can be either `JupyterServer` or `KernelGateway`. 
+ **`JupyterServer` applications:** This application type enables access to the visual interface for Studio Classic. Every user and shared space in Studio Classic gets its own JupyterServer application.
+ **`KernelGateway` applications:** This application type enables access to the code run environment and kernels for your Studio Classic notebooks and terminals. For more information, see [Jupyter Kernel Gateway](https://jupyter-kernel-gateway.readthedocs.io/en/latest/).

For more information about Studio Classic's architecture and Studio Classic applications, see [Use Amazon SageMaker Studio Classic Notebooks](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks.html).

**Topics**
+ [Create a Lifecycle Configuration from the AWS CLI for Amazon SageMaker Studio Classic](studio-lcc-create-cli.md)
+ [Create a Lifecycle Configuration from the SageMaker AI Console for Amazon SageMaker Studio Classic](studio-lcc-create-console.md)

# Create a Lifecycle Configuration from the AWS CLI for Amazon SageMaker Studio Classic
<a name="studio-lcc-create-cli"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following topic shows how to create a lifecycle configuration using the AWS CLI to automate customization for your Studio Classic environment.

## Prerequisites
<a name="studio-lcc-create-cli-prerequisites"></a>

Before you begin, complete the following prerequisites: 
+ Update the AWS CLI by following the steps in [Installing the current AWS CLI Version](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html#install-tool-bundled).
+ From your local machine, run `aws configure` and provide your AWS credentials. For information about AWS credentials, see [Understanding and getting your AWS credentials](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html). 
+ Onboard to a SageMaker AI domain by following the steps in [Amazon SageMaker AI domain overview](gs-studio-onboard.md).

## Step 1: Create a lifecycle configuration
<a name="studio-lcc-create-cli-step1"></a>

The following procedure shows how to create a lifecycle configuration script that prints `Hello World`.

**Note**  
Each script can have up to **16,384 characters**.

1. From your local machine, create a file named `my-script.sh` with the following content.

   ```
   #!/bin/bash
   set -eux
   echo 'Hello World!'
   ```

1. Convert your `my-script.sh` file into base64 format. Encoding the script in base64 prevents errors that are caused by spacing and line break encoding.

   ```
   LCC_CONTENT=`openssl base64 -A -in my-script.sh`
   ```

1. Create a lifecycle configuration for use with Studio Classic. The following command creates a lifecycle configuration that runs when you launch an associated `KernelGateway` application. 

   ```
   aws sagemaker create-studio-lifecycle-config \
   --region region \
   --studio-lifecycle-config-name my-studio-lcc \
   --studio-lifecycle-config-content $LCC_CONTENT \
   --studio-lifecycle-config-app-type KernelGateway
   ```

   Note the ARN of the newly created lifecycle configuration that is returned. This ARN is required to attach the lifecycle configuration to your application.
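Before running the commands above, two quick local checks can catch common problems: that the script stays under the 16,384-character limit, and that the base64 content decodes back to the original script. The following is a sketch that reuses the example script from step 1.

```
# Recreate the example script from step 1.
cat > my-script.sh <<'EOF'
#!/bin/bash
set -eux
echo 'Hello World!'
EOF

# The script content must be at most 16,384 characters.
SCRIPT_LENGTH=$(wc -c < my-script.sh)
[ "$SCRIPT_LENGTH" -le 16384 ] && echo "length OK"

# The -A flag keeps the base64 output on a single line; confirm that it
# round-trips to the original script before passing it to the CLI.
LCC_CONTENT=$(openssl base64 -A -in my-script.sh)
printf '%s' "$LCC_CONTENT" | openssl base64 -d -A | diff - my-script.sh && echo "round-trip OK"
```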

## Step 2: Attach the lifecycle configuration to your domain, user profile, or shared space
<a name="studio-lcc-create-cli-step2"></a>

To attach the lifecycle configuration, you must update the `UserSettings` for your domain or user profile, or the `SpaceSettings` for a shared space. Lifecycle configuration scripts that are associated at the domain level are inherited by all users. However, scripts that are associated at the user profile level are scoped to a specific user, while scripts that are associated at the shared space level are scoped to the shared space. 

The following example shows how to create a new user profile with the lifecycle configuration attached. You can also create a new domain or space with a lifecycle configuration attached by using the [create-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-domain.html) and [create-space](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-space.html) commands, respectively.

Add the lifecycle configuration ARN from the previous step to the settings for the appropriate app type. For example, place it in the `JupyterServerAppSettings` of the user. You can add multiple lifecycle configurations at the same time by passing a list of lifecycle configurations. When a user launches a JupyterServer application with the AWS CLI, they can pass a lifecycle configuration to use instead of the default. The lifecycle configuration that the user passes must belong to the list of lifecycle configurations in `JupyterServerAppSettings`.

```
# Create a new UserProfile
aws sagemaker create-user-profile --domain-id domain-id \
--user-profile-name user-profile-name \
--region region \
--user-settings '{
"JupyterServerAppSettings": {
  "LifecycleConfigArns":
    [lifecycle-configuration-arn-list]
  }
}'
```

The following example shows how to update an existing shared space to attach the lifecycle configuration. You can also update an existing domain or user profile with a lifecycle configuration attached by using the [update-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-domain.html) or [update-user-profile](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-user-profile.html) command. When you update the list of lifecycle configurations attached, you must pass all lifecycle configurations as part of the list. If a lifecycle configuration is not part of this list, it will not be attached to the application.

```
aws sagemaker update-space --domain-id domain-id \
--space-name space-name \
--region region \
--space-settings '{
"JupyterServerAppSettings": {
  "LifecycleConfigArns":
    [lifecycle-configuration-arn-list]
  }
}'
```
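Because the update replaces the entire list, one way to avoid detaching existing lifecycle configurations is to merge the new ARN into the currently attached list before calling `update-space`. The following is a sketch: the `space.json` content stands in for a real `describe-space` response, and both ARNs are placeholders.

```
# Stand-in for:
#   aws sagemaker describe-space --domain-id d-xxxxxxxxxxxx \
#       --space-name space-name > space.json
cat > space.json <<'EOF'
{
    "SpaceSettings": {
        "JupyterServerAppSettings": {
            "LifecycleConfigArns": [
                "arn:aws:sagemaker:us-east-2:acct-id:studio-lifecycle-config/existing-lcc"
            ]
        }
    }
}
EOF

# Append the new ARN to the existing list and write the merged settings.
python3 - <<'EOF'
import json

NEW_ARN = "arn:aws:sagemaker:us-east-2:acct-id:studio-lifecycle-config/my-studio-lcc"

with open("space.json") as f:
    space = json.load(f)

arns = space["SpaceSettings"]["JupyterServerAppSettings"]["LifecycleConfigArns"]
if NEW_ARN not in arns:
    arns.append(NEW_ARN)

with open("space-settings.json", "w") as f:
    json.dump({"JupyterServerAppSettings": {"LifecycleConfigArns": arns}}, f, indent=2)
EOF
```

Passing `file://space-settings.json` as the `--space-settings` value then keeps every previously attached configuration.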

For information about setting a default lifecycle configuration for a resource, see [Set Default Lifecycle Configurations for Amazon SageMaker Studio Classic](studio-lcc-defaults.md).

## Step 3: Launch application with lifecycle configuration
<a name="studio-lcc-create-cli-step3"></a>

After you attach a lifecycle configuration to a domain, user profile, or space, the user can select it when launching an application with the AWS CLI. This section describes how to launch an application with an attached lifecycle configuration. For information about changing the default lifecycle configuration after launching a JupyterServer application, see [Set Default Lifecycle Configurations for Amazon SageMaker Studio Classic](studio-lcc-defaults.md).

Launch the desired application type using the `create-app` command and specify the lifecycle configuration ARN in the `resource-spec` argument. 
+ The following example shows how to create a `JupyterServer` application with an associated lifecycle configuration. When creating the `JupyterServer`, the `app-name` must be `default`. The lifecycle configuration ARN passed as part of the `resource-spec` parameter must be part of the list of lifecycle configuration ARNs specified in `UserSettings` for your domain or user profile, or `SpaceSettings` for a shared space.

  ```
  aws sagemaker create-app --domain-id domain-id \
  --region region \
  --user-profile-name user-profile-name \
  --app-type JupyterServer \
  --resource-spec LifecycleConfigArn=lifecycle-configuration-arn \
  --app-name default
  ```
+ The following example shows how to create a `KernelGateway` application with an associated lifecycle configuration.

  ```
  aws sagemaker create-app --domain-id domain-id \
  --region region \
  --user-profile-name user-profile-name \
  --app-type KernelGateway \
  --resource-spec LifecycleConfigArn=lifecycle-configuration-arn,SageMakerImageArn=sagemaker-image-arn,InstanceType=instance-type \
  --app-name app-name
  ```

# Create a Lifecycle Configuration from the SageMaker AI Console for Amazon SageMaker Studio Classic
<a name="studio-lcc-create-console"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following topic shows how to create a lifecycle configuration from the Amazon SageMaker AI console to automate customization for your Studio Classic environment.

## Prerequisites
<a name="studio-lcc-create-console-prerequisites"></a>

Before you can begin this tutorial, complete the following prerequisite:
+ Onboard to Amazon SageMaker Studio Classic. For more information, see [Onboard to Amazon SageMaker Studio Classic](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-onboard.html).

## Step 1: Create a new lifecycle configuration
<a name="studio-lcc-create-console-step1"></a>

You can create a lifecycle configuration by entering a script from the Amazon SageMaker AI console.

**Note**  
Each script can have up to **16,384 characters**.

The following procedure shows how to create a lifecycle configuration script that prints `Hello World`.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Lifecycle configurations**. 

1. Choose the **Studio** tab.

1. Choose **Create configuration**.

1. Under **Select configuration type**, select the type of application that the lifecycle configuration should be attached to. For more information about selecting which application to attach the lifecycle configuration to, see [Set Default Lifecycle Configurations for Amazon SageMaker Studio Classic](studio-lcc-defaults.md).

1. Choose **Next**.

1. In the **Configuration settings** section, enter a name for your lifecycle configuration.

1. In the **Scripts** section, enter the following content.

   ```
   #!/bin/bash
   set -eux
   echo 'Hello World!'
   ```

1. (Optional) Create a tag for your lifecycle configuration.

1. Choose **Submit**.

## Step 2: Attach the lifecycle configuration to a domain or user profile
<a name="studio-lcc-create-console-step2"></a>

Lifecycle configuration scripts associated at the domain level are inherited by all users. However, scripts that are associated at the user profile level are scoped to a specific user. 

You can attach multiple lifecycle configurations to a domain or user profile for both JupyterServer and KernelGateway applications.

**Note**  
To attach a lifecycle configuration to a shared space, you must use the AWS CLI. For more information, see [Create a Lifecycle Configuration from the AWS CLI for Amazon SageMaker Studio Classic](studio-lcc-create-cli.md).

The following sections show how to attach a lifecycle configuration to your domain or user profile.

### Attach to a domain
<a name="studio-lcc-create-console-step2-domain"></a>

The following shows how to attach a lifecycle configuration to your existing domain from the SageMaker AI console.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1. From the list of domains, select the domain to attach the lifecycle configuration to.

1. From the **Domain details** page, choose the **Environment** tab.

1. Under **Lifecycle configurations for personal Studio apps**, choose **Attach**.

1. Under **Source**, choose **Existing configuration**.

1. Under **Studio lifecycle configurations**, select the lifecycle configuration that you created in the previous step.

1. Choose **Attach to domain**.

### Attach to your user profile
<a name="studio-lcc-create-console-step2-userprofile"></a>

The following shows how to attach a lifecycle configuration to your existing user profile.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1. From the list of domains, select the domain that contains the user profile to attach the lifecycle configuration to.

1. Under **User profiles**, select the user profile.

1. From the **User Details** page, choose **Edit**.

1. On the left navigation, choose **Studio settings**.

1. Under **Lifecycle configurations attached to user**, choose **Attach**.

1. Under **Source**, choose **Existing configuration**.

1. Under **Studio lifecycle configurations**, select the lifecycle configuration that you created in the previous step.

1. Choose **Attach to user profile**.

## Step 3: Launch an application with the lifecycle configuration
<a name="studio-lcc-create-console-step3"></a>

After you attach a lifecycle configuration to a domain or user profile, you can launch an application with that attached lifecycle configuration. Choosing which lifecycle configuration to launch with depends on the application type.
+ **JupyterServer**: When launching a JupyterServer application from the console, SageMaker AI always uses the default lifecycle configuration. You can't use a different lifecycle configuration when launching from the console. For information about changing the default lifecycle configuration after launching a JupyterServer application, see [Set Default Lifecycle Configurations for Amazon SageMaker Studio Classic](studio-lcc-defaults.md).

  To select a different attached lifecycle configuration, you must launch with the AWS CLI. For more information about launching a JupyterServer application with an attached lifecycle configuration from the AWS CLI, see [Create a Lifecycle Configuration from the AWS CLI for Amazon SageMaker Studio Classic](studio-lcc-create-cli.md).
+ **KernelGateway**: You can select any of the attached lifecycle configurations when launching a KernelGateway application using the Studio Classic Launcher.

The following procedure describes how to launch a KernelGateway application with an attached lifecycle configuration from the SageMaker AI console.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Launch Studio Classic. For more information, see [Launch Amazon SageMaker Studio Classic](studio-launch.md).

1. In the Studio Classic UI, open the Studio Classic Launcher. For more information, see [Use the Amazon SageMaker Studio Classic Launcher](studio-launcher.md). 

1. In the Studio Classic Launcher, navigate to the **Notebooks and compute resources** section. 

1. Choose **Change environment**.

1. In the **Change environment** dialog, use the dropdown menus to select your **Image**, **Kernel**, **Instance type**, and **Start-up script**. If there is no default lifecycle configuration, the **Start-up script** value defaults to `No script`. Otherwise, the **Start-up script** value is your default lifecycle configuration. After you select a lifecycle configuration, you can view the entire script.

1. Choose **Select**.

1. Back in the Launcher, choose **Create notebook** to launch a new notebook kernel with your selected image and lifecycle configuration.

## Step 4: View logs for a lifecycle configuration
<a name="studio-lcc-create-console-step4"></a>

You can view the logs for your lifecycle configuration after it has been attached to a domain or user profile. 

1. First, provide access to CloudWatch for your AWS Identity and Access Management (IAM) role. Add read permissions for the following log group and log stream.
   + **Log group:** `/aws/sagemaker/studio`
   + **Log stream:** `domain/user-profile/app-type/app-name/LifecycleConfigOnStart`

    For information about adding permissions, see [Enabling logging from certain AWS services](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html).

1. From within Studio Classic, navigate to the **Running Terminals and Kernels** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/running-terminals-kernels.png)) to monitor your lifecycle configuration.

1. Select an application from the list of running applications. Applications with attached lifecycle configurations have an attached indicator icon ![\[Code brackets symbol representing programming or markup languages.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/studio-lcc-indicator-icon.png).

1. Select the indicator icon for your application. This opens a new panel that lists the lifecycle configuration.

1. From the new panel, select **View logs**. This opens a new tab that displays the logs.

# Set Default Lifecycle Configurations for Amazon SageMaker Studio Classic
<a name="studio-lcc-defaults"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Although you can attach multiple lifecycle configuration scripts to a single resource, you can only set one default lifecycle configuration for each JupyterServer or KernelGateway application. The behavior of the default lifecycle configuration depends on whether it is set for JupyterServer or KernelGateway apps. 
+ **JupyterServer apps:** When set as the default lifecycle configuration script for JupyterServer apps, the lifecycle configuration script runs automatically when the user signs in to Studio Classic for the first time or restarts Studio Classic. Use this default lifecycle configuration to automate one-time setup actions for the Studio Classic developer environment, such as installing notebook extensions or setting up a GitHub repo. For an example of this, see [Customize Amazon SageMaker Studio using Lifecycle Configurations](https://aws.amazon.com/blogs/machine-learning/customize-amazon-sagemaker-studio-using-lifecycle-configurations/).
+ **KernelGateway apps:** When set as the default lifecycle configuration script for KernelGateway apps, the lifecycle configuration is selected by default in the Studio Classic launcher. Users can launch a notebook or terminal with the default script selected, or they can select a different one from the list of lifecycle configurations.

SageMaker AI supports setting a default lifecycle configuration for the following resources:
+ Domains
+ User profiles
+ Shared spaces

While domains and user profiles support setting a default lifecycle configuration from both the Amazon SageMaker AI console and AWS Command Line Interface, shared spaces only support setting a default lifecycle configuration from the AWS CLI.

You can set a lifecycle configuration as the default when creating a new resource or updating an existing resource. The following topics demonstrate how to set a default lifecycle configuration using the SageMaker AI console and AWS CLI.

## Default lifecycle configuration inheritance
<a name="studio-lcc-defaults-inheritance"></a>

Default lifecycle configurations set at the *domain* level are inherited by all users and shared spaces. Default lifecycle configurations set at the *user* and *shared space* level are scoped to only that user or shared space. User and space defaults override defaults set at the domain level.

A default KernelGateway lifecycle configuration set for a domain applies to all KernelGateway applications launched in the domain. Unless the user selects a different lifecycle configuration from the list presented in the Studio Classic launcher, the default lifecycle configuration is used. The default script also runs if `No Script` is selected by the user. For more information about selecting a script, see [Step 3: Launch an application with the lifecycle configuration](studio-lcc-create-console.md#studio-lcc-create-console-step3).

**Topics**
+ [Default lifecycle configuration inheritance](#studio-lcc-defaults-inheritance)
+ [Set Defaults from the AWS CLI for Amazon SageMaker Studio Classic](studio-lcc-defaults-cli.md)
+ [Set Defaults from the SageMaker AI Console for Amazon SageMaker Studio Classic](studio-lcc-defaults-console.md)

# Set Defaults from the AWS CLI for Amazon SageMaker Studio Classic
<a name="studio-lcc-defaults-cli"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can set default lifecycle configuration scripts from the AWS CLI for the following resources:
+ Domains
+ User profiles
+ Shared spaces

The following sections outline how to set default lifecycle configuration scripts from the AWS CLI.

**Topics**
+ [Prerequisites](#studio-lcc-defaults-cli-prereq)
+ [Set a default lifecycle configuration when creating a new resource](#studio-lcc-defaults-cli-new)
+ [Set a default lifecycle configuration for an existing resource](#studio-lcc-defaults-cli-existing)

## Prerequisites
<a name="studio-lcc-defaults-cli-prereq"></a>

Before you begin, complete the following prerequisites:
+ Update the AWS CLI by following the steps in [Installing the current AWS CLI version](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html#install-tool-bundled).
+ From your local machine, run `aws configure` and provide your AWS credentials. For information about AWS credentials, see [Understanding and getting your AWS credentials](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html). 
+ Onboard to an Amazon SageMaker AI domain by following the steps in [Amazon SageMaker AI domain overview](gs-studio-onboard.md).
+ Create a lifecycle configuration following the steps in [Create and Associate a Lifecycle Configuration with Amazon SageMaker Studio Classic](studio-lcc-create.md).
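If you prefer to stay in the CLI for the last prerequisite, a lifecycle configuration can also be created with `create-studio-lifecycle-config`. The following is a minimal sketch: the configuration name and script content are placeholders, and the script must be base64 encoded before it is passed to the API. The command returns the ARN that you use in the rest of this topic.

```shell
# Hypothetical name and script content; replace with your own.
# -w 0 disables base64 line wrapping (GNU coreutils; macOS base64 does not wrap).
LCC_CONTENT=$(echo '#!/bin/bash
set -eux
echo "on-start setup complete"' | base64 -w 0)

aws sagemaker create-studio-lifecycle-config \
    --region region \
    --studio-lifecycle-config-name my-on-start-config \
    --studio-lifecycle-config-content "$LCC_CONTENT" \
    --studio-lifecycle-config-app-type JupyterServer
```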

## Set a default lifecycle configuration when creating a new resource
<a name="studio-lcc-defaults-cli-new"></a>

To set a default lifecycle configuration when creating a new domain, user profile, or space, pass the ARN of your previously created lifecycle configuration as part of one of the following AWS CLI commands:
+ [create-user-profile](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-user-profile.html)
+ [create-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-domain.html)
+ [create-space](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/create-space.html)

You must pass the lifecycle configuration ARN for the following values in the KernelGateway or JupyterServer default settings:
+ `DefaultResourceSpec`:`LifecycleConfigArn` - This specifies the default lifecycle configuration for the application type.
+ `LifecycleConfigArns` - This is the list of all lifecycle configurations attached to the application type. The default lifecycle configuration must also be part of this list.

For example, the following API call creates a new user profile with a default lifecycle configuration.

```
aws sagemaker create-user-profile --domain-id domain-id \
--user-profile-name user-profile-name \
--region region \
--user-settings '{
"KernelGatewayAppSettings": {
    "DefaultResourceSpec": { 
            "InstanceType": "ml.t3.medium",
            "LifecycleConfigArn": "lifecycle-configuration-arn"
         },
    "LifecycleConfigArns": [lifecycle-configuration-arn-list]
  }
}'
```

## Set a default lifecycle configuration for an existing resource
<a name="studio-lcc-defaults-cli-existing"></a>

To set or update the default lifecycle configuration for an existing resource, pass the ARN of your previously created lifecycle configuration as part of one of the following AWS CLI commands:
+ [update-user-profile](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-user-profile.html)
+ [update-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-domain.html)
+ [update-space](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-space.html)

You must pass the lifecycle configuration ARN for the following values in the KernelGateway or JupyterServer default settings:
+ `DefaultResourceSpec`:`LifecycleConfigArn` - This specifies the default lifecycle configuration for the application type.
+ `LifecycleConfigArns` - This is the list of all lifecycle configurations attached to the application type. The default lifecycle configuration must also be part of this list.

For example, the following API call updates a user profile with a default lifecycle configuration.

```
aws sagemaker update-user-profile --domain-id domain-id \
--user-profile-name user-profile-name \
--region region \
--user-settings '{
"KernelGatewayAppSettings": {
    "DefaultResourceSpec": {
            "InstanceType": "ml.t3.medium",
            "LifecycleConfigArn": "lifecycle-configuration-arn"
         },
    "LifecycleConfigArns": [lifecycle-configuration-arn-list]
  }
}'
```

The following API call updates a domain to set a new default lifecycle configuration.

```
aws sagemaker update-domain --domain-id domain-id \
--region region \
--default-user-settings '{
"JupyterServerAppSettings": {
    "DefaultResourceSpec": {
            "InstanceType": "system",
            "LifecycleConfigArn": "lifecycle-configuration-arn"
         },
    "LifecycleConfigArns": [lifecycle-configuration-arn-list]
  }
}'
```

# Set Defaults from the SageMaker AI Console for Amazon SageMaker Studio Classic
<a name="studio-lcc-defaults-console"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can set default lifecycle configuration scripts from the SageMaker AI console for the following resources:
+ Domains
+ User profiles

You cannot set default lifecycle configuration scripts for shared spaces from the SageMaker AI console. For information about setting defaults for shared spaces, see [Set Defaults from the AWS CLI for Amazon SageMaker Studio Classic](studio-lcc-defaults-cli.md).

The following sections outline how to set default lifecycle configuration scripts from the SageMaker AI console.

**Topics**
+ [Prerequisites](#studio-lcc-defaults-cli-prerequisites)
+ [Set a default lifecycle configuration for a domain](#studio-lcc-defaults-cli-domain)
+ [Set a default lifecycle configuration for a user profile](#studio-lcc-defaults-cli-user-profile)

## Prerequisites
<a name="studio-lcc-defaults-cli-prerequisites"></a>

Before you begin, complete the following prerequisites:
+ Onboard to an Amazon SageMaker AI domain by following the steps in [Amazon SageMaker AI domain overview](gs-studio-onboard.md).
+ Create a lifecycle configuration following the steps in [Create and Associate a Lifecycle Configuration with Amazon SageMaker Studio Classic](studio-lcc-create.md).

## Set a default lifecycle configuration for a domain
<a name="studio-lcc-defaults-cli-domain"></a>

The following procedure shows how to set a default lifecycle configuration for a domain from the SageMaker AI console.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the list of domains, select the name of the domain to set the default lifecycle configuration for.

1. From the **Domain details** page, choose the **Environment** tab.

1. Under **Lifecycle configurations for personal Studio apps**, select the lifecycle configuration that you want to set as the default for the domain. You can set distinct defaults for JupyterServer and KernelGateway applications.

1. Choose **Set as default**. This opens a pop-up window that lists the current defaults for JupyterServer and KernelGateway applications.

1. Choose **Set as default** to set the lifecycle configuration as the default for its respective application type.

## Set a default lifecycle configuration for a user profile
<a name="studio-lcc-defaults-cli-user-profile"></a>

The following procedure shows how to set a default lifecycle configuration for a user profile from the SageMaker AI console.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the list of domains, select the name of the domain that contains the user profile that you want to set the default lifecycle configuration for.

1. From the **Domain details** page, choose the **User profiles** tab.

1. Select the name of the user profile to set the default lifecycle configuration for. This opens a **User Details** page.

1. From the **User Details** page, choose **Edit**. This opens an **Edit user profile** page.

1. From the **Edit user profile** page, choose **Step 2 Studio settings**.

1. Under **Lifecycle configurations attached to user**, select the lifecycle configuration that you want to set as the default for the user profile. You can set distinct defaults for JupyterServer and KernelGateway applications.

1. Choose **Set as default**. This opens a pop-up window that lists the current defaults for JupyterServer and KernelGateway applications.

1. Choose **Set as default** to set the lifecycle configuration as the default for its respective application type.

# Debug Lifecycle Configurations in Amazon SageMaker Studio Classic
<a name="studio-lcc-debug"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following topics show how to get information about and debug your lifecycle configurations.

**Topics**
+ [Verify lifecycle configuration process from CloudWatch Logs](#studio-lcc-debug-logs)
+ [JupyterServer app failure](#studio-lcc-debug-jupyterserver)
+ [KernelGateway app failure](#studio-lcc-debug-kernel)
+ [Lifecycle configuration timeout](#studio-lcc-debug-timeout)

## Verify lifecycle configuration process from CloudWatch Logs
<a name="studio-lcc-debug-logs"></a>

Lifecycle configurations only log `STDOUT` and `STDERR`.

`STDOUT` is the default output stream for bash scripts. To write to `STDERR`, append `>&2` to a command. For example, `echo 'hello' >&2`.

Logs for your lifecycle configurations are published to Amazon CloudWatch in your AWS account. You can find them in the `/aws/sagemaker/studio` log group in the CloudWatch console.

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the left navigation pane, choose **Logs**, and then select **Log groups**.

1. On the **Log groups** page, search for `aws/sagemaker/studio`. 

1. Select the log group.

1. On the **Log group details** page, choose the **Log streams** tab.

1. To find the logs for a specific app, search the log streams using the following format:

   ```
   domain-id/space-name/app-type/default/LifecycleConfigOnStart
   ```

   For example, to find the lifecycle configuration logs for domain `d-m85lcu8vbqmz`, space name `i-sonic-js`, and application type `JupyterLab`, use the following search string:

   ```
   d-m85lcu8vbqmz/i-sonic-js/JupyterLab/default/LifecycleConfigOnStart
   ```
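You can also retrieve these logs from the AWS CLI instead of the console. The following sketch reuses the example domain, space name, and application type from the search string above; `aws logs filter-log-events` accepts the log stream name as a prefix:

```shell
DOMAIN_ID=d-m85lcu8vbqmz
SPACE_NAME=i-sonic-js
APP_TYPE=JupyterLab
STREAM_PREFIX="$DOMAIN_ID/$SPACE_NAME/$APP_TYPE/default/LifecycleConfigOnStart"

aws logs filter-log-events \
    --log-group-name /aws/sagemaker/studio \
    --log-stream-name-prefix "$STREAM_PREFIX"
```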

## JupyterServer app failure
<a name="studio-lcc-debug-jupyterserver"></a>

If your JupyterServer app crashes because of an issue with the attached lifecycle configuration, Studio Classic displays the following error message on the Studio Classic startup screen. 

```
Failed to create SageMaker Studio due to start-up script failure
```

Select the `View script logs` link to view the CloudWatch logs for your JupyterServer app.

If the faulty lifecycle configuration is specified in the `DefaultResourceSpec` of your domain, user profile, or shared space, Studio Classic continues to use it even after you restart Studio Classic.

To resolve this error, follow the steps in [Set Default Lifecycle Configurations for Amazon SageMaker Studio Classic](studio-lcc-defaults.md) to remove the lifecycle configuration script from the `DefaultResourceSpec` or select another script as the default. Then launch a new JupyterServer app.

## KernelGateway app failure
<a name="studio-lcc-debug-kernel"></a>

If your KernelGateway app crashes because of an issue with the attached lifecycle configuration, Studio Classic displays an error message in your Studio Classic notebook.

Choose `View script logs` to view the CloudWatch logs for your KernelGateway app.

In this case, the lifecycle configuration was selected in the Studio Classic launcher when you launched the Studio Classic notebook.

To resolve this error, use the Studio Classic launcher to select a different lifecycle configuration or select `No script`.

**Note**  
A default KernelGateway lifecycle configuration specified in `DefaultResourceSpec` applies to all KernelGateway images in the domain, user profile, or shared space unless the user selects a different script from the list presented in the Studio Classic launcher. The default script also runs if `No Script` is selected by the user. For more information on selecting a script, see [Step 3: Launch an application with the lifecycle configuration](studio-lcc-create-console.md#studio-lcc-create-console-step3).

## Lifecycle configuration timeout
<a name="studio-lcc-debug-timeout"></a>

There is a lifecycle configuration timeout limitation of 5 minutes. If a lifecycle configuration script takes longer than 5 minutes to run, Studio Classic throws an error.

To resolve this error, ensure that your lifecycle configuration script completes in less than 5 minutes. 

To help decrease the run time of scripts, try the following:
+ Cut down on unnecessary steps. For example, limit the conda environments in which large packages are installed.
+ Run tasks in parallel processes.
+ Use the `nohup` command in your script to ensure that hangup signals are ignored and do not stop the execution of the script.
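The last two suggestions can be combined. A minimal sketch (the package name is a placeholder) that starts a long-running install in the background so the script itself returns well within the limit:

```shell
#!/bin/bash
set -eux

# Start the long-running install in the background; nohup lets it keep
# running after the lifecycle configuration process detaches, and its
# output is captured in a log file for later inspection.
nohup pip install --quiet some-large-package > /tmp/install.log 2>&1 &

# The script itself exits immediately, well under the 5-minute limit.
echo "install started in background"
```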

# Update and Detach Lifecycle Configurations in Amazon SageMaker Studio Classic
<a name="studio-lcc-delete"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

A lifecycle configuration script can't be changed after it's created. To update your script, you must create a new lifecycle configuration script and attach it to the respective domain, user profile, or shared space. For more information about creating and attaching the lifecycle configuration, see [Create and Associate a Lifecycle Configuration with Amazon SageMaker Studio Classic](studio-lcc-create.md).

The following topic shows how to detach a lifecycle configuration using the AWS CLI and SageMaker AI console.

**Topics**
+ [Prerequisites](#studio-lcc-delete-pre)
+ [Detach using the AWS CLI](#studio-lcc-delete-cli)

## Prerequisites
<a name="studio-lcc-delete-pre"></a>

Before detaching a lifecycle configuration, you must complete the following prerequisite.
+ To successfully detach a lifecycle configuration, no running application can be using the lifecycle configuration. You must first shut down the running applications as shown in [Shut Down and Update Amazon SageMaker Studio Classic and Apps](studio-tasks-update.md).

## Detach using the AWS CLI
<a name="studio-lcc-delete-cli"></a>

To detach a lifecycle configuration using the AWS CLI, remove the desired lifecycle configuration from the list of lifecycle configurations attached to the resource and pass the list as part of the respective command:
+ [update-user-profile](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-user-profile.html)
+ [update-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-domain.html)
+ [update-space](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-space.html)

For example, the following command removes all lifecycle configurations for KernelGateways attached to the domain.

```
aws sagemaker update-domain --domain-id domain-id \
--region region \
--default-user-settings '{
"KernelGatewayAppSettings": {
  "LifecycleConfigArns":
    []
  }
}'
```

# Attach Suggested Git Repos to Amazon SageMaker Studio Classic
<a name="studio-git-attach"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic offers a Git extension for you to enter the URL of a Git repository (repo), clone it into your environment, push changes, and view commit history. In addition to this Git extension, you can also attach suggested Git repository URLs at the Amazon SageMaker AI domain or user profile level. Then, you can select a repo URL from the list of suggestions and clone it into your environment using the Git extension in Studio Classic. 

The following topics show how to attach Git repo URLs to a domain or user profile from the AWS CLI and SageMaker AI console. You'll also learn how to detach these repository URLs.

**Topics**
+ [Attach a Git Repository from the AWS CLI for Amazon SageMaker Studio Classic](studio-git-attach-cli.md)
+ [Attach a Git Repository from the SageMaker AI Console for Amazon SageMaker Studio Classic](studio-git-attach-console.md)
+ [Detach Git Repos from Amazon SageMaker Studio Classic](studio-git-detach.md)

# Attach a Git Repository from the AWS CLI for Amazon SageMaker Studio Classic
<a name="studio-git-attach-cli"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following topic shows how to attach a Git repository URL using the AWS CLI, so that Amazon SageMaker Studio Classic automatically suggests it for cloning. After you attach the Git repository URL, you can clone it by following the steps in [Clone a Git Repository in Amazon SageMaker Studio Classic](studio-tasks-git.md).

## Prerequisites
<a name="studio-git-attach-cli-prerequisites"></a>

Before you begin, complete the following prerequisites: 
+ Update the AWS CLI by following the steps in [Installing the current AWS CLI Version](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html#install-tool-bundled).
+ From your local machine, run `aws configure` and provide your AWS credentials. For information about AWS credentials, see [Understanding and getting your AWS credentials](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html). 
+ Onboard to an Amazon SageMaker AI domain. For more information, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).

## Attach the Git repo to a domain or user profile
<a name="studio-git-attach-cli-attach"></a>

Git repo URLs associated at the domain level are inherited by all users. However, Git repo URLs that are associated at the user profile level are scoped to a specific user. You can attach multiple Git repo URLs to a domain or user profile by passing a list of repository URLs.

The following sections show how to attach a Git repo URL to your domain and user profile.

### Attach to a domain
<a name="studio-git-attach-cli-attach-domain"></a>

The following command attaches a Git repo URL to an existing domain.

```
aws sagemaker update-domain --region region --domain-id domain-id \
    --default-user-settings JupyterServerAppSettings={CodeRepositories=[{RepositoryUrl="repository"}]}
```
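To attach more than one repository in the same call, pass several entries in the `CodeRepositories` list. The following is a sketch using JSON input; the URLs are placeholders.

```shell
# Placeholder repository URLs; replace with your own.
SETTINGS='{
  "JupyterServerAppSettings": {
    "CodeRepositories": [
      {"RepositoryUrl": "https://github.com/example-org/repo-one.git"},
      {"RepositoryUrl": "https://github.com/example-org/repo-two.git"}
    ]
  }
}'

aws sagemaker update-domain --region region --domain-id domain-id \
    --default-user-settings "$SETTINGS"
```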

### Attach to a user profile
<a name="studio-git-attach-cli-attach-userprofile"></a>

The following shows how to attach a Git repo URL to an existing user profile.

```
aws sagemaker update-user-profile --domain-id domain-id --user-profile-name user-name \
    --user-settings JupyterServerAppSettings={CodeRepositories=[{RepositoryUrl="repository"}]}
```

# Attach a Git Repository from the SageMaker AI Console for Amazon SageMaker Studio Classic
<a name="studio-git-attach-console"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following topic shows how to associate a Git repository URL from the Amazon SageMaker AI console to clone it in your Studio Classic environment. After you associate the Git repository URL, you can clone it by following the steps in [Clone a Git Repository in Amazon SageMaker Studio Classic](studio-tasks-git.md).

## Prerequisites
<a name="studio-git-attach-console-prerequisites"></a>

Before you can begin this tutorial, you must onboard to an Amazon SageMaker AI domain. For more information, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).

## Attach the Git repo to a domain or user profile
<a name="studio-git-attach-console-attach"></a>

Git repo URLs associated at the domain level are inherited by all users. However, Git repo URLs that are associated at the user profile level are scoped to a specific user. 

The following sections show how to attach a Git repo URL to a domain and user profile.

### Attach to a domain
<a name="studio-git-attach-console-attach-domain"></a>

**To attach a Git repo URL to an existing domain**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1. Select the domain to attach the Git repo to.

1. On the **Domain details** page, choose the **Environment** tab.

1. On the **Suggested code repositories for the domain** tab, choose **Attach**.

1. Under **Source**, enter the Git repository URL.

1. Choose **Attach to domain**.

### Attach to a user profile
<a name="studio-git-attach-console-attach-userprofile"></a>

The following shows how to attach a Git repository URL to an existing user profile.

**To attach a Git repository URL to a user profile**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**. 

1. Select the domain that includes the user profile to attach the Git repo to.

1. On the **Domain details** page, choose the **User profiles** tab.

1. Select the user profile to attach the Git repo URL to.

1. On the **User details** page, choose **Edit**.

1. On the **Studio settings** page, choose **Attach** from the **Suggested code repositories for the user** section.

1. Under **Source**, enter the Git repository URL.

1. Choose **Attach to user**.

# Detach Git Repos from Amazon SageMaker Studio Classic
<a name="studio-git-detach"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

This guide shows how to detach Git repository URLs from an Amazon SageMaker AI domain or user profile using the AWS CLI or Amazon SageMaker AI console.

**Topics**
+ [Detach a Git repo using the AWS CLI](#studio-git-detach-cli)
+ [Detach the Git repo using the SageMaker AI console](#studio-git-detach-console)

## Detach a Git repo using the AWS CLI
<a name="studio-git-detach-cli"></a>

To detach all Git repo URLs from a domain or user profile, you must pass an empty list of code repositories. This list is passed as part of the `JupyterServerAppSettings` parameter in an `update-domain` or `update-user-profile` command. To detach only one Git repo URL, pass the code repositories list without the desired Git repo URL. This section shows how to detach all Git repo URLs from your domain or user profile using the AWS Command Line Interface (AWS CLI).

### Detach from a domain
<a name="studio-git-detach-cli-domain"></a>

The following command detaches all Git repo URLs from a domain.

```
aws sagemaker update-domain --region region --domain-id domain-id \
    --default-user-settings JupyterServerAppSettings={CodeRepositories=[]}
```

### Detach from a user profile
<a name="studio-git-detach-cli-userprofile"></a>

The following command detaches all Git repo URLs from a user profile.

```
aws sagemaker update-user-profile --domain-id domain-id --user-profile-name user-name \
    --user-settings JupyterServerAppSettings={CodeRepositories=[]}
```
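To detach a single repo rather than all of them, you can filter the current list before writing it back. The following is a hedged sketch with placeholder URLs; a real script would read the current list with boto3's `describe_user_profile` and write the filtered result back with `update_user_profile`.

```python
# Hedged sketch: build the CodeRepositories list minus the one URL to detach.
# The URLs are placeholders so the snippet runs without AWS credentials.

def without_repo(code_repositories, url_to_detach):
    """Return the CodeRepositories list with one repo URL removed."""
    return [r for r in code_repositories if r["RepositoryUrl"] != url_to_detach]

current = [
    {"RepositoryUrl": "https://github.com/example/repo-a.git"},
    {"RepositoryUrl": "https://github.com/example/repo-b.git"},
]
updated = without_repo(current, "https://github.com/example/repo-a.git")
print(updated)  # [{'RepositoryUrl': 'https://github.com/example/repo-b.git'}]
```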

## Detach the Git repo using the SageMaker AI console
<a name="studio-git-detach-console"></a>

The following sections show how to detach a Git repo URL from a domain or user profile using the SageMaker AI console.

### Detach from a domain
<a name="studio-git-detach-console-domain"></a>

Use the following steps to detach a Git repo URL from an existing domain.

**To detach a Git repo URL from an existing domain**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **domains**. 

1. Select the domain with the Git repo URL that you want to detach.

1. On the **domain details** page, choose the **Environment** tab.

1. On the **Suggested code repositories for the domain** tab, select the Git repository URL to detach.

1. Choose **Detach**.

1. From the new window, choose **Detach**.

### Detach from a user profile
<a name="studio-git-detach-console-userprofile"></a>

Use the following steps to detach a Git repo URL from a user profile.

**To detach a Git repo URL from a user profile**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **domains**. 

1. Select the domain that includes the user profile with the Git repo URL that you want to detach.

1. On the **domain details** page, choose the **User profiles** tab.

1. Select the user profile with the Git repo URL that you want to detach.

1. On the **User details** page, choose **Edit**.

1. On the **Studio settings** page, select the Git repo URL to detach from the **Suggested code repositories for the user** tab.

1. Choose **Detach**.

1. From the new window, choose **Detach**.

# Perform Common Tasks in Amazon SageMaker Studio Classic
<a name="studio-tasks"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following sections describe how to perform common tasks in Amazon SageMaker Studio Classic. For an overview of the Studio Classic interface, see [Amazon SageMaker Studio Classic UI Overview](studio-ui.md).

**Topics**
+ [Upload Files to Amazon SageMaker Studio Classic](studio-tasks-files.md)
+ [Clone a Git Repository in Amazon SageMaker Studio Classic](studio-tasks-git.md)
+ [Stop a Training Job in Amazon SageMaker Studio Classic](studio-tasks-stop-training-job.md)
+ [Use TensorBoard in Amazon SageMaker Studio Classic](studio-tensorboard.md)
+ [Use Amazon Q Developer with Amazon SageMaker Studio Classic](sm-q.md)
+ [Manage Your Amazon EFS Storage Volume in Amazon SageMaker Studio Classic](studio-tasks-manage-storage.md)
+ [Provide Feedback on Amazon SageMaker Studio Classic](studio-tasks-provide-feedback.md)
+ [Shut Down and Update Amazon SageMaker Studio Classic and Apps](studio-tasks-update.md)

# Upload Files to Amazon SageMaker Studio Classic
<a name="studio-tasks-files"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

When you onboard to Amazon SageMaker Studio Classic, a home directory is created for you in the Amazon Elastic File System (Amazon EFS) volume that was created for your team. Studio Classic can only open files that have been uploaded to your directory. The Studio Classic file browser maps to your home directory.

**Note**  
Studio Classic does not support uploading folders. Although you can upload only individual files, you can select and upload multiple files at the same time.

**To upload files to your home directory**

1. In the left sidebar, choose the **File Browser** icon ( ![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/folder.png)).

1. In the file browser, choose the **Upload Files** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/icons/File_upload_squid.png)).

1. Select the files you want to upload and then choose **Open**.

1. Double-click a file to open the file in a new tab in Studio Classic.

# Clone a Git Repository in Amazon SageMaker Studio Classic
<a name="studio-tasks-git"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic can connect only to a local Git repository (repo). This means that you must clone the Git repo from within Studio Classic to access the files in the repo. Studio Classic offers a Git extension that you can use to enter the URL of a Git repo, clone it into your environment, push changes, and view the commit history. If the repo is private and requires credentials, you are prompted to enter your user credentials: your username and a personal access token. For more information about personal access tokens, see [Managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens).

Admins can also attach suggested Git repository URLs at the Amazon SageMaker AI domain or user profile level. Users can then select the repo URL from the list of suggestions and clone that into Studio Classic. For more information about attaching suggested repos, see [Attach Suggested Git Repos to Amazon SageMaker Studio Classic](studio-git-attach.md).

The following procedure shows how to clone a GitHub repo from Studio Classic. 

**To clone the repo**

1. In the left sidebar, choose the **Git** icon ( ![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/git.png)).

1. Choose **Clone a Repository**. This opens a new window.

1. In the **Clone Git Repository** window, enter the URL in the following format for the Git repo that you want to clone or select a repository from the list of **Suggested repositories**.

   ```
   https://github.com/path-to-git-repo/repo.git
   ```

1. If you entered the URL of the Git repo manually, select **Clone "*git-url*"** from the dropdown menu.

1. Under **Project directory to clone into**, enter the path to the local directory that you want to clone the Git repo into. If this value is left empty, Studio Classic clones the repo into JupyterLab's root directory.

1. Choose **Clone**. This opens a new terminal window.

1. If the repo requires credentials, you are prompted to enter your username and personal access token. This prompt does not accept passwords; you must use a personal access token. For more information about personal access tokens, see [Managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens).

1. Wait for the download to finish. After the repo has been cloned, the **File Browser** opens to display the cloned repo.

1. Double-click the repo to open it.

1. Choose the **Git** icon to view the Git user interface, which now tracks the repo.

1. To track a different repo, open the repo in the file browser and then choose the **Git** icon.
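If you leave **Project directory to clone into** empty, git derives the target directory name from the repo URL. A small helper mirroring that rule, using the placeholder URL format from above:

```python
def default_clone_dir(repo_url):
    """Mimic git's default target directory name for a clone URL."""
    name = repo_url.rstrip("/").split("/")[-1]
    return name[: -len(".git")] if name.endswith(".git") else name

print(default_clone_dir("https://github.com/path-to-git-repo/repo.git"))  # repo
```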

# Stop a Training Job in Amazon SageMaker Studio Classic
<a name="studio-tasks-stop-training-job"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can stop a training job with the Amazon SageMaker Studio Classic UI. When you stop a training job, its status changes to `Stopping`, at which time billing ceases. An algorithm can delay termination in order to save model artifacts, after which the job status changes to `Stopped`. For more information, see the [stop\_training\_job](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.stop_training_job) method in the AWS SDK for Python (Boto3).

**To stop a training job**

1. Follow the procedure in [View experiments and runs](experiments-view-compare.md) until you open the **Describe Trial Component** tab.

1. At the upper-right side of the tab, choose **Stop training job**. The **Status** at the top left of the tab changes to **Stopped**.

1. To view the training time and billing time, choose **AWS Settings**.
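The same stop-then-wait flow can be scripted. The following is a minimal sketch that polls until the job leaves the `Stopping` state; the `stop` and `describe` callables are stand-ins for boto3's `stop_training_job` and `describe_training_job`, so the snippet runs without AWS credentials.

```python
import time

def stop_and_wait(stop, describe, poll_seconds=0.0):
    """Request a stop, then poll until the job reaches a terminal status."""
    stop()  # e.g. sagemaker.stop_training_job(TrainingJobName=name)
    while True:
        status = describe()["TrainingJobStatus"]
        if status == "Stopping":
            # Billing has already ceased; the algorithm may still be
            # saving model artifacts before termination.
            time.sleep(poll_seconds)
            continue
        return status  # Stopped, Completed, or Failed

# Fake describe() that reports Stopping twice, then Stopped.
statuses = iter(["Stopping", "Stopping", "Stopped"])
final = stop_and_wait(lambda: None, lambda: {"TrainingJobStatus": next(statuses)})
print(final)  # Stopped
```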

# Use TensorBoard in Amazon SageMaker Studio Classic
<a name="studio-tensorboard"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following section outlines how to install and run TensorBoard in Amazon SageMaker Studio Classic.

**Note**  
This guide shows how to open the TensorBoard application through a SageMaker Studio Classic notebook server of an individual SageMaker AI domain user profile. For a more comprehensive TensorBoard experience integrated with SageMaker Training and the access control functionalities of SageMaker AI domain, see [TensorBoard in Amazon SageMaker AI](tensorboard-on-sagemaker.md).

## Prerequisites
<a name="studio-tensorboard-prereq"></a>

This tutorial requires a SageMaker AI domain. For more information, see [Amazon SageMaker AI domain overview](gs-studio-onboard.md).

## Set Up `TensorBoardCallback`
<a name="studio-tensorboard-setup"></a>

1. Launch Studio Classic and open the Launcher. For more information, see [Use the Amazon SageMaker Studio Classic Launcher](studio-launcher.md).

1. In the Amazon SageMaker Studio Classic Launcher, under **Notebooks and compute resources**, choose the **Change environment** button.

1. In the **Change environment** dialog, use the dropdown menus to select the `TensorFlow 2.6 Python 3.8 CPU Optimized` Studio Classic **Image**.

1. Back in the Launcher, choose the **Create notebook** tile. Your notebook launches and opens in a new Studio Classic tab.

1. Run the following code in your notebook cells.

1. Import the required packages. 

   ```
   import os
   import datetime
   import tensorflow as tf
   ```

1. Create a Keras model.

   ```
   mnist = tf.keras.datasets.mnist
   
   (x_train, y_train),(x_test, y_test) = mnist.load_data()
   x_train, x_test = x_train / 255.0, x_test / 255.0
   
   def create_model():
     return tf.keras.models.Sequential([
       tf.keras.layers.Flatten(input_shape=(28, 28)),
       tf.keras.layers.Dense(512, activation='relu'),
       tf.keras.layers.Dropout(0.2),
       tf.keras.layers.Dense(10, activation='softmax')
     ])
   ```

1. Create a directory for your TensorBoard logs.

   ```
   LOG_DIR = os.path.join(os.getcwd(), "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
   ```

1. Run training with TensorBoard.

   ```
   model = create_model()
   model.compile(optimizer='adam',
                 loss='sparse_categorical_crossentropy',
                 metrics=['accuracy'])

   tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=LOG_DIR, histogram_freq=1)
   
   model.fit(x=x_train,
             y=y_train,
             epochs=5,
             validation_data=(x_test, y_test),
             callbacks=[tensorboard_callback])
   ```

1. Generate the EFS path for the TensorBoard logs. You use this path to set up your logs from the terminal.

   ```
   EFS_PATH_LOG_DIR = "/".join(LOG_DIR.strip("/").split('/')[1:-1])
   print(EFS_PATH_LOG_DIR)
   ```

   Retrieve the `EFS_PATH_LOG_DIR`. You will need it in the TensorBoard installation section.
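To see what that expression produces, here is the same path arithmetic run on a sample `LOG_DIR` (the `/root` prefix stands in for the notebook's working directory): the leading home component and the trailing timestamped run directory are both dropped, so TensorBoard picks up every run under `logs/fit`.

```python
LOG_DIR = "/root/logs/fit/20230101-120000"  # sample value for illustration
EFS_PATH_LOG_DIR = "/".join(LOG_DIR.strip("/").split("/")[1:-1])
print(EFS_PATH_LOG_DIR)  # logs/fit
```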

## Install TensorBoard
<a name="studio-tensorboard-install"></a>

1. Choose the **Amazon SageMaker Studio Classic** button in the top-left corner of Studio Classic to open the Amazon SageMaker Studio Classic Launcher. This launcher must be opened from your root directory. For more information, see [Use the Amazon SageMaker Studio Classic Launcher](studio-launcher.md).

1. In the Launcher, under **Utilities and files**, choose **System terminal**.

1. From the terminal, run the following commands. Copy `EFS_PATH_LOG_DIR` from the Jupyter notebook. You must run this from the `/home/sagemaker-user` root directory.

   ```
   pip install tensorboard
   tensorboard --logdir <EFS_PATH_LOG_DIR>
   ```

## Launch TensorBoard
<a name="studio-tensorboard-launch"></a>

1. To launch TensorBoard, copy your Studio Classic URL and replace `lab?` with `proxy/6006/` as follows. You must include the trailing `/` character.

   ```
   https://<YOUR_URL>.studio.region.sagemaker.aws/jupyter/default/proxy/6006/
   ```

1. Navigate to the URL to examine your results. 
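The URL rewrite in step 1 can be expressed as a one-line helper; the sample URL below is a placeholder.

```python
def tensorboard_url(studio_url, port=6006):
    """Rewrite a Studio Classic notebook URL into its TensorBoard proxy URL."""
    # The trailing "/" is required; the proxy rejects the URL without it.
    return studio_url.replace("lab?", f"proxy/{port}/")

url = "https://d-xxxxxxxxxxxx.studio.us-east-1.sagemaker.aws/jupyter/default/lab?"
print(tensorboard_url(url))
```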

# Use Amazon Q Developer with Amazon SageMaker Studio Classic
<a name="sm-q"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker Studio Classic is an integrated machine learning environment where you can build, train, deploy, and analyze your models all in the same application. With Amazon Q Developer in Amazon SageMaker AI, you can generate code recommendations and get suggested improvements for code issues.

Amazon Q Developer is a generative artificial intelligence (AI) powered conversational assistant that can help you understand, build, extend, and operate AWS applications. In the context of an integrated AWS coding environment, Amazon Q can generate code recommendations based on developers' code, as well as their comments in natural language. 

Amazon Q has the most support for Java, Python, JavaScript, TypeScript, C#, Go, PHP, Rust, Kotlin, and SQL, as well as the Infrastructure as Code (IaC) languages JSON (CloudFormation), YAML (CloudFormation), HCL (Terraform), and CDK (TypeScript, Python). It also supports code generation for Ruby, C++, C, Shell, and Scala. For examples of how Amazon Q integrates with Amazon SageMaker AI and displays code suggestions in the Amazon SageMaker Studio Classic IDE, see [Code Examples](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/inline-suggestions-code-examples.html) in the *Amazon Q Developer User Guide*.

For more information on using Amazon Q with Amazon SageMaker Studio Classic, see the [Amazon Q Developer User Guide](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/sagemaker-setup.html).

# Manage Your Amazon EFS Storage Volume in Amazon SageMaker Studio Classic
<a name="studio-tasks-manage-storage"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The first time a user on your team onboards to Amazon SageMaker Studio Classic, Amazon SageMaker AI creates an Amazon Elastic File System (Amazon EFS) volume for the team. A home directory is created in the volume for each user who onboards to Studio Classic as part of your team. Notebook files and data files are stored in these directories. Users don't have access to other team members' home directories. Amazon SageMaker AI domain does not support mounting custom or additional Amazon EFS volumes.

**Important**  
Don't delete the Amazon EFS volume. If you delete it, the domain will no longer function and all of your users will lose their work.

**To find your Amazon EFS volume**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **domains**. 

1. From the **Domains** page, select the domain to find the ID for.

1. From the **Domain details** page, select the **Domain settings** tab.

1. Under **General settings**, find the **Domain ID**. The ID will be in the following format: `d-xxxxxxxxxxxx`.

1. Pass the `Domain ID`, as `DomainId`, to the [describe\_domain](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.describe_domain) method.

1. In the response from `describe_domain`, note the value for the `HomeEfsFileSystemId` key. This is the Amazon EFS file system ID.

1. Open the [Amazon EFS console](https://console.aws.amazon.com/efs#/file-systems/). Make sure the AWS Region is the same Region that's used by Studio Classic.

1. Under **File systems**, choose the file system ID from the previous step.

1. To verify that you've chosen the correct file system, select the **Tags** heading. The value corresponding to the `ManagedByAmazonSageMakerResource` key should match the domain ID from the previous steps.

For information on how to access the Amazon EFS volume, see [Using file systems in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/using-fs.html).

To delete the Amazon EFS volume, see [Deleting an Amazon EFS file system](https://docs.aws.amazon.com/efs/latest/ug/delete-efs-fs.html).
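Steps 7 and 8 of the procedure above can be scripted. The following is a hedged sketch that reads `HomeEfsFileSystemId` out of a `describe_domain`-shaped response; the sample response is hand-built so the snippet runs without AWS credentials.

```python
def efs_file_system_id(describe_domain_response):
    """Return the Amazon EFS file system ID from a describe_domain response."""
    return describe_domain_response["HomeEfsFileSystemId"]

# A real script would use:
#   boto3.client("sagemaker").describe_domain(DomainId="d-xxxxxxxxxxxx")
# The values below are placeholders.
sample_response = {
    "DomainId": "d-xxxxxxxxxxxx",
    "HomeEfsFileSystemId": "fs-0123456789abcdef0",
}
print(efs_file_system_id(sample_response))  # fs-0123456789abcdef0
```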

# Provide Feedback on Amazon SageMaker Studio Classic
<a name="studio-tasks-provide-feedback"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Amazon SageMaker AI takes your feedback seriously, and we encourage you to provide it.

**To provide feedback**

1. At the right of SageMaker Studio Classic, find the **Feedback** icon (![\[Speech bubble icon representing messaging or communication functionality.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/studio/icons/feedback.png)).

1. Choose a smiley emoji to let us know how satisfied you are with SageMaker Studio Classic and add any feedback you'd care to share with us.

1. Decide whether to share your identity with us, then choose **Submit**.

# Shut Down and Update Amazon SageMaker Studio Classic and Apps
<a name="studio-tasks-update"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following topics show how to shut down and update SageMaker Studio Classic and Studio Classic Apps.

Studio Classic provides a notification icon (![\[Red circle icon with white exclamation mark, indicating an alert or warning.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/icons/Notification.png)) in the upper-right corner of the Studio Classic UI. This notification icon displays the number of unread notices. To read the notices, select the icon.

Studio Classic provides two types of notifications:
+ Upgrade – Displayed when Studio Classic or one of the Studio Classic apps have released a new version. To update Studio Classic, see [Shut Down and Update Amazon SageMaker Studio Classic](studio-tasks-update-studio.md). To update Studio Classic apps, see [Shut Down and Update Amazon SageMaker Studio Classic Apps](studio-tasks-update-apps.md).
+ Information – Displayed for new features and other information.

To reset the notification icon, you must select the link in each notice. Notices that you have already read may still be counted by the icon; after you have updated Studio Classic and the Studio Classic apps, this count does not mean that updates are still needed.

To learn how to update [Amazon SageMaker Data Wrangler](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler.html), see [Shut Down and Update Amazon SageMaker Studio Classic Apps](studio-tasks-update-apps.md).

To ensure that you have the most recent software updates, update Amazon SageMaker Studio Classic and your Studio Classic apps using the methods outlined in the following topics.

**Topics**
+ [Shut Down and Update Amazon SageMaker Studio Classic](studio-tasks-update-studio.md)
+ [Shut Down and Update Amazon SageMaker Studio Classic Apps](studio-tasks-update-apps.md)

# Shut Down and Update Amazon SageMaker Studio Classic
<a name="studio-tasks-update-studio"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

To update Amazon SageMaker Studio Classic to the latest release, you must shut down the JupyterServer app. You can shut down the JupyterServer app from the SageMaker AI console, from Amazon SageMaker Studio, or from within Studio Classic. After the JupyterServer app is shut down, you must reopen Studio Classic through the SageMaker AI console or from Studio, which creates a new version of the JupyterServer app.

You cannot delete the JupyterServer application while the Studio Classic UI is still open in the browser. If you try, SageMaker AI automatically re-creates the JupyterServer application.

Any unsaved notebook information is lost in the process. The user data in the Amazon EFS volume isn't impacted.

Some of the services within Studio Classic, like Data Wrangler, run in their own apps. To update these services, you must delete the corresponding app. To learn more, see [Shut Down and Update Amazon SageMaker Studio Classic Apps](studio-tasks-update-apps.md).

**Note**  
A JupyterServer app is associated with a single Studio Classic user. When you update the app for one user, it doesn't affect other users.

The following sections show how to update the JupyterServer app from the SageMaker AI console, from Studio, or from inside Studio Classic.

## Shut down and update from the SageMaker AI console
<a name="studio-tasks-update-studio-console"></a>

1. Navigate to [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **domains**. 

1. Select the domain that includes the Studio Classic application that you want to update.

1. Under **User profiles**, select your user name.

1. Under **Apps**, in the row displaying **JupyterServer**, choose **Action**, then choose **Delete**.

1. Choose **Yes, delete app**.

1. Type **delete** in the confirmation box.

1. Choose **Delete**.

1. After the app has been deleted, launch a new Studio Classic app to get the latest version.

## Shut down and update from Studio
<a name="studio-tasks-update-studio-updated"></a>

1. Navigate to Studio following the steps in [Launch Amazon SageMaker Studio](studio-updated-launch.md).

1. From the Studio UI, find the applications pane on the left side.

1. From the applications pane, select **Studio Classic**.

1. From the Studio Classic landing page, select the Studio Classic instance to stop.

1. Choose **Stop**.

1. After the app has been stopped, select **Run** to use the latest version.

## Shut down and update from inside Studio Classic
<a name="studio-tasks-update-studio-classic"></a>

1. Launch Studio Classic.

1. On the top menu, choose **File** then **Shut Down**.

1. Choose one of the following options:
   + **Shutdown Server** – Shuts down the JupyterServer app. Terminal sessions, kernel sessions, SageMaker images, and instances aren't shut down. These resources continue to accrue charges.
   + **Shutdown All** – Shuts down all apps, terminal sessions, kernel sessions, SageMaker images, and instances. These resources no longer accrue charges.

1. Close the window.

1. After the app has been deleted, launch a new Studio Classic app to use the latest version.

# Shut Down and Update Amazon SageMaker Studio Classic Apps
<a name="studio-tasks-update-apps"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

To update an Amazon SageMaker Studio Classic app to the latest release, you must first shut down the corresponding KernelGateway app from the SageMaker AI console. After the KernelGateway app is shut down, you must reopen it through SageMaker Studio Classic by running a new kernel. The kernel automatically updates. Any unsaved notebook information is lost in the process. The user data in the Amazon EFS volume isn't impacted.

After an application has been shut down for 24 hours, SageMaker AI deletes all of its metadata. To be considered an update and retain application metadata, an application must be restarted within 24 hours after the previous application was shut down. After this window, a newly created application is considered a new application rather than an update of the previous one.

**Note**  
A KernelGateway app is associated with a single Studio Classic user. Updating the app for one user doesn't affect other users.

**To update the KernelGateway app**

1. Navigate to [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. On the left navigation pane, choose **Admin configurations**.

1. Under **Admin configurations**, choose **Domains**.

1. Select the domain that includes the application that you want to update.

1. Under **User profiles**, select your user name.

1. Under **Apps**, in the row displaying the **App name**, choose **Action**, then choose **Delete**.

   To update Data Wrangler, delete the app that starts with **sagemaker-data-wrang**.

1. Choose **Yes, delete app**.

1. Type **delete** in the confirmation box.

1. Choose **Delete**.

1. After the app has been deleted, launch a new kernel from within Studio Classic to use the latest version.
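If you administer domains programmatically, the console steps above can also be sketched with the AWS SDK. The following is a minimal sketch, not a production tool: the boto3 calls are shown as comments (credentials, domain ID, and user profile name are assumed), and the prefix filter mirrors the Data Wrangler note in step 6.

```python
# Sketch: find the KernelGateway apps to delete so a fresh, updated kernel
# can be launched. The equivalent boto3 calls would be:
#   client = boto3.client("sagemaker")
#   apps = client.list_apps(DomainIdEquals=domain_id,
#                           UserProfileNameEquals=user_profile)["Apps"]
#   client.delete_app(DomainId=domain_id, UserProfileName=user_profile,
#                     AppType="KernelGateway", AppName=app_name)

def data_wrangler_apps(apps):
    """Select KernelGateway apps whose name marks them as Data Wrangler apps."""
    return [a for a in apps
            if a["AppType"] == "KernelGateway"
            and a["AppName"].startswith("sagemaker-data-wrang")]

# Illustrative entries shaped like a ListApps response.
sample = [
    {"AppType": "KernelGateway", "AppName": "sagemaker-data-wrangler-1"},
    {"AppType": "KernelGateway", "AppName": "datascience-1-0-ml-t3-medium"},
    {"AppType": "JupyterServer", "AppName": "default"},
]
print([a["AppName"] for a in data_wrangler_apps(sample)])
# ['sagemaker-data-wrangler-1']
```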

# Amazon SageMaker Studio Classic Pricing
<a name="studio-pricing"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

When the first member of your team onboards to Amazon SageMaker Studio Classic, Amazon SageMaker AI creates an Amazon Elastic File System (Amazon EFS) volume for the team. When this member, or any member of the team, opens Studio Classic, a home directory is created in the volume for the member. A storage charge is incurred for this directory. Subsequently, additional storage charges are incurred for the notebooks and data files stored in the member's home directory. For pricing information on Amazon EFS, see [Amazon EFS Pricing](https://aws.amazon.com/efs/pricing/).

Additional costs are incurred when other operations are run inside Studio Classic, for example, running a notebook, running training jobs, and hosting a model.

For information on the costs associated with using Studio Classic notebooks, see [Usage Metering for Amazon SageMaker Studio Classic Notebooks](notebooks-usage-metering.md).

For information about billing along with pricing examples, see [Amazon SageMaker Pricing](https://aws.amazon.com/sagemaker/pricing/).

If Amazon SageMaker Studio is your default experience, see [Amazon SageMaker Studio pricing](studio-updated-cost.md) for more pricing information.

# Troubleshooting Amazon SageMaker Studio Classic
<a name="studio-troubleshooting"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[AWS managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

This topic describes how to troubleshoot common Amazon SageMaker Studio Classic issues during setup and use. Each of the following errors is paired with its solution.

## Studio Classic application issues
<a name="studio-troubleshooting-ui"></a>

 The following issues occur when launching and using the Studio Classic application.
+ **Screen not loading: Clearing workspace and waiting doesn't help**

  When launching the Studio Classic application, a pop-up displays the following message. No matter which option is selected, Studio Classic does not load. 

  ```
  Loading...
  The loading screen is taking a long time. Would you like to clear the workspace or keep waiting?
  ```

  The Studio Classic application can be slow to launch if multiple tabs are open in the Studio Classic workspace or a large number of files are stored on Amazon EFS. The pop-up should disappear within a few seconds of the Studio Classic workspace becoming ready.

  If you continue to see a loading screen with a spinner after selecting either of the options, there could be connectivity issues with the Amazon Virtual Private Cloud used by Studio Classic.  

  To resolve connectivity issues with the Amazon Virtual Private Cloud (Amazon VPC) used by Studio Classic, verify the following networking configurations:
  + If your domain is set up in `VpcOnly` mode: Verify that there is an Amazon VPC endpoint for AWS STS, or a NAT Gateway for outbound traffic, including traffic over the internet. To do this, follow the steps in [Connect Studio notebooks in a VPC to external resources](studio-notebooks-and-internet-access.md). 
  + If your Amazon VPC is set up with a custom DNS instead of the DNS provided by Amazon: Verify that the routes are configured using Dynamic Host Configuration Protocol (DHCP) for each Amazon VPC endpoint added to the Amazon VPC used by Studio Classic. For more information about setting default and custom DHCP option sets, see [DHCP option sets in Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_DHCP_Options.html). 
+ ***Internal Failure* when launching Studio Classic**

  When launching Studio Classic, you are unable to view the Studio Classic UI. You also see an error similar to the following, with **Internal Failure** as the error detail. 

  ```
  Amazon SageMaker Studio
  The JupyterServer app default encountered a problem and was stopped.
  ```

  This error can be caused by multiple factors. If completing the following steps does not resolve your issue, create a case with [AWS Support](https://aws.amazon.com/premiumsupport/).
  + **Missing Amazon EFS mount target**: Studio Classic uses Amazon EFS for storage. The Amazon EFS volume needs a mount target for each subnet that the Amazon SageMaker AI domain is created in. If this Amazon EFS mount target is deleted accidentally, the Studio Classic application cannot load because it cannot mount the user’s file directory. To resolve this issue, complete the following steps. 

**To verify or create mount targets**

    1. Find the Amazon EFS volume that is associated with the domain by using the [DescribeDomain](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeDomain.html) API call.  

    1. Sign in to the AWS Management Console and open the Amazon EFS console at [ https://console.aws.amazon.com/efs/](https://console.aws.amazon.com/efs/).

    1. From the list of Amazon EFS volumes, select the Amazon EFS volume that is associated with the domain. 

    1. On the Amazon EFS details page, select the **Network** tab. Verify that there are mount targets for all of the subnets that the domain is set up in. 

    1. If mount targets are missing, add the missing Amazon EFS mount targets. For instructions, see [Creating and managing mount targets and security groups](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html). 

    1. After the missing mount targets are created, launch the Studio Classic application. 
  + **Conflicting files in the user’s `.local` folder**: If you're using JupyterLab version 1 on Studio Classic, conflicting libraries in your `.local` folder can cause issues when launching the Studio Classic application. To resolve this, update your user profile's default JupyterLab version to JupyterLab 3.0. For more information about viewing and updating the JupyterLab version, see [JupyterLab Versioning in Amazon SageMaker Studio Classic](studio-jl.md). 
+ ***ConfigurationError: LifecycleConfig* when launching Studio Classic**

  You can't view the Studio Classic UI when launching Studio Classic. This is caused by issues with the default lifecycle configuration script attached to the domain.

**To resolve lifecycle configuration issues**

  1. View the Amazon CloudWatch Logs for the lifecycle configuration to trace the command that caused the failure. To view the log, follow the steps in [Verify lifecycle configuration process from CloudWatch Logs](studio-lcc-debug.md#studio-lcc-debug-logs). 

  1. Detach the default script from the user profile or domain. For more information, see [Update and Detach Lifecycle Configurations in Amazon SageMaker Studio Classic](studio-lcc-delete.md). 

  1. Launch the Studio Classic application. 

  1. Debug your lifecycle configuration script. You can run the lifecycle configuration script from the system terminal to troubleshoot. When the script runs successfully from the terminal, you can attach the script to the user profile or the domain. 
+ **SageMaker Studio Classic core functionalities are not available.**

  If you get this error message when opening Studio Classic, it might be due to Python package version conflicts. These conflicts can occur if you used either of the following commands in a notebook or terminal to install Python packages whose versions conflict with SageMaker AI package dependencies.

  ```
  !pip install
  ```

  ```
  pip install --user
  ```

  To resolve this issue, complete the following steps:

  1. Uninstall recently installed Python packages. If you’re not sure which package to uninstall, create a case with [AWS Support](https://aws.amazon.com/premiumsupport/).

  1. Restart Studio Classic:

     1. Shut down Studio Classic from the **File** menu.

     1. Wait for one minute.

     1. Reopen Studio Classic by refreshing the page or opening it from the AWS Management Console.

  The problem should be resolved once you have uninstalled the package that caused the conflict. To install packages without causing this issue again, use `%pip install` without the `--user` flag.

  If the issue persists, create a new user profile and set up your environment with that user profile.

  If these solutions don't fix the issue, create a case with [AWS Support](https://aws.amazon.com/premiumsupport/).
+ **Unable to open Studio Classic from the AWS Management Console.**

  If you are unable to open Studio Classic and cannot create a new running instance with all default settings, create a case with [AWS Support](https://aws.amazon.com/premiumsupport/).
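Several of the checks above can be scripted. As one example, the mount-target verification from the *Internal Failure* steps reduces to comparing the domain's subnets against the subnets that have Amazon EFS mount targets. The boto3 calls are shown as comments (credentials, domain ID, and file system ID are assumed), and the subnet IDs below are placeholders:

```python
# Sketch of the Amazon EFS mount-target check. With boto3, the inputs would
# come from:
#   subnets = boto3.client("sagemaker").describe_domain(DomainId=domain_id)["SubnetIds"]
#   targets = boto3.client("efs").describe_mount_targets(
#       FileSystemId=fs_id)["MountTargets"]

def subnets_missing_mount_targets(domain_subnet_ids, mount_targets):
    """Return domain subnets that have no Amazon EFS mount target."""
    covered = {mt["SubnetId"] for mt in mount_targets}
    return sorted(set(domain_subnet_ids) - covered)

subnets = ["subnet-aaa", "subnet-bbb"]          # placeholder domain subnets
targets = [{"SubnetId": "subnet-aaa"}]          # placeholder mount targets
print(subnets_missing_mount_targets(subnets, targets))
# ['subnet-bbb']
```

Any subnet returned by the helper needs a mount target created for it, as described in the mount-target procedure above.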

## KernelGateway application issues
<a name="studio-troubleshooting-kg"></a>

 The following issues are specific to KernelGateway applications that are launched in Studio Classic. 
+ **Cannot access the Kernel session**

  When the user launches a new notebook, they are unable to connect to the notebook session. If the KernelGateway application's status is `InService`, verify the following to resolve the issue.
  + **Check Security Group configurations**

    If the domain is set up in `VpcOnly` mode, the security group associated with the domain must allow traffic between ports in the range `8192-65535` for connectivity between the JupyterServer and KernelGateway apps.

**To verify the security group rules**

    1. Get the security groups associated with the domain using the [DescribeDomain](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeDomain.html) API call.

    1. Sign in to the AWS Management Console and open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

    1. From the left navigation, under **Security**, choose **Security Groups**.

    1. Filter by the IDs of the security groups that are associated with the domain.

    1. For each security group: 

       1. Select the security group. 

       1. From the security group details page, view the **Inbound rules**. Verify that traffic is allowed between ports in the range `8192-65535`. 

    For more information about security group rules, see [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#working-with-security-group-rules). For more information about requirements to use Studio Classic in `VpcOnly` mode, see [Connect Studio notebooks in a VPC to external resources](studio-notebooks-and-internet-access.md).
  + **Verify firewall and WebSocket connections**

    If the KernelGateway apps have an `InService` status and the user is unable to connect to the Studio Classic notebook session, verify the firewall and WebSocket settings. 

    1. Launch the Studio Classic application. For more information, see [Launch Amazon SageMaker Studio Classic](studio-launch.md). 

    1. Open your web browser’s developer tools. 

    1. Choose the **Network** tab. 

    1. Search for an entry that matches the following format.

       ```
       wss://<domain-id>.studio.<region>.sagemaker.aws/jupyter/default/api/kernels/<unique-code>/channels?session_id=<unique-code>
       ```

       If the status or response code for the entry is anything other than `101`, then your network settings are preventing the connection between the Studio Classic application and the KernelGateway apps.

       To resolve this issue, contact the team that manages your networking settings to allowlist the Studio Classic URL and enable WebSocket connections.
+ **Unable to launch an app caused by exceeded resource quotas**

  When a user tries to launch a new notebook, the notebook creation fails with either of the following errors. This is caused by exceeding resource quotas. 
  + 

    ```
    Unable to start more Apps of AppType [KernelGateway] and ResourceSpec(instanceType=[]) for UserProfile []. Please delete an App with a matching AppType and ResourceSpec, then try again
    ```

    Studio Classic supports up to four running KernelGateway apps on the same instance. To resolve this issue, you can do either of the following:
    + Delete an existing KernelGateway application running on the instance, then restart the new notebook.
    + Start the new notebook on a different instance type.

     For more information, see [Change the Instance Type for an Amazon SageMaker Studio Classic Notebook](notebooks-run-and-manage-switch-instance-type.md).
  + 

    ```
    An error occurred (ResourceLimitExceeded) when calling the CreateApp operation
    ```

    In this case, the account does not have a sufficient quota to create a Studio Classic application on the specified instance type. To resolve this, navigate to the Service Quotas console at [https://console.aws.amazon.com/servicequotas/](https://console.aws.amazon.com/servicequotas/). In that console, request an increase for the `Studio KernelGateway Apps running on instance-type instance` quota. For more information, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html).
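The security-group check earlier in this section can likewise be automated. The following sketch inspects inbound rules for coverage of the `8192-65535` port range; the boto3 call is shown as a comment (credentials and group IDs are assumed), and the rule data below is illustrative:

```python
# Sketch of the inbound-rule check for JupyterServer <-> KernelGateway traffic.
# With boto3, the rules would come from each group's "IpPermissions" list in:
#   boto3.client("ec2").describe_security_groups(GroupIds=group_ids)

REQUIRED_FROM, REQUIRED_TO = 8192, 65535

def allows_kernel_ports(ip_permissions):
    """True if any TCP (or all-traffic) inbound rule covers 8192-65535."""
    for perm in ip_permissions:
        if perm.get("IpProtocol") in ("tcp", "-1"):
            # All-traffic ("-1") rules carry no port fields; treat as full range.
            lo = perm.get("FromPort", 0)
            hi = perm.get("ToPort", 65535)
            if lo <= REQUIRED_FROM and hi >= REQUIRED_TO:
                return True
    return False

rules_ok = [{"IpProtocol": "tcp", "FromPort": 8192, "ToPort": 65535}]
rules_bad = [{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443}]
print(allows_kernel_ports(rules_ok), allows_kernel_ports(rules_bad))
# True False
```

If the helper returns `False` for every security group attached to the domain, add an inbound rule covering the full range, as described in the verification steps above.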