Semantic Segmentation Hyperparameters
The following tables list the hyperparameters supported by the Amazon SageMaker semantic segmentation algorithm for network architecture, data inputs, and training. You specify Semantic Segmentation for training in the `AlgorithmName` of the `CreateTrainingJob` request.
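The hyperparameters in the following tables are passed to the training job as simple key/value pairs. As a minimal sketch, they can be set through the SageMaker Python SDK (v2) as shown below; the role ARN, S3 output path, instance type, and example values are placeholders, not values taken from this page.

```python
# A minimal sketch, assuming the SageMaker Python SDK v2 is installed as `sagemaker`.
# The role ARN, S3 output path, and instance type are placeholders.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
region = session.boto_region_name

# Look up the registry path of the built-in semantic segmentation training image.
training_image = image_uris.retrieve("semantic-segmentation", region)

estimator = Estimator(
    image_uri=training_image,
    role="arn:aws:iam::111122223333:role/SageMakerRole",   # placeholder role ARN
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://amzn-s3-demo-bucket/semseg-output",  # placeholder bucket
    sagemaker_session=session,
)

# Hyperparameters from the tables below are set as key/value pairs.
estimator.set_hyperparameters(
    num_classes=21,              # required (see Data Hyperparameters)
    num_training_samples=1464,   # required (see Data Hyperparameters)
)
# estimator.fit(...) would then be called with the algorithm's input channels.
```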
Network Architecture Hyperparameters
Parameter Name | Description |
---|---|
`backbone` | The backbone to use for the algorithm's encoder component. Optional. Valid values: `resnet-50`, `resnet-101`. Default value: `resnet-50` |
`use_pretrained_model` | Whether a pretrained model is to be used for the backbone. Optional. Valid values: `True`, `False`. Default value: `True` |
`algorithm` | The algorithm to use for semantic segmentation. Optional. Valid values: `fcn`, `psp`, `deeplab`. Default value: `fcn` |
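For example, the network architecture hyperparameters above could be set on the estimator sketched in the introduction; the specific choices shown here are illustrative.

```python
# Illustrative network architecture settings, continuing the `estimator`
# sketch from the introduction.
estimator.set_hyperparameters(
    backbone="resnet-50",        # encoder backbone
    use_pretrained_model=True,   # initialize the backbone from pretrained weights
    algorithm="fcn",             # decoder algorithm
)
```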
Data Hyperparameters
Parameter Name | Description |
---|---|
`num_classes` | The number of classes to segment. Required. Valid values: 2 ≤ positive integer ≤ 254 |
`num_training_samples` | The number of samples in the training data. The algorithm uses this value to set up the learning rate scheduler. Required. Valid values: positive integer |
`base_size` | Defines how images are rescaled before cropping. Images are rescaled so that the long-side length is set to `base_size` multiplied by a random number between 0.5 and 2.0, and the short side is computed to preserve the aspect ratio. Optional. Valid values: positive integer > 16. Default value: 520 |
`crop_size` | The image size for input during training. We randomly rescale the input image based on `base_size`, and then take a random square crop with side length equal to `crop_size`. `crop_size` is automatically rounded up to multiples of 8. Optional. Valid values: positive integer > 16. Default value: 240 |
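The data hyperparameters can be set the same way; the class count and sample count below are placeholders for your own dataset.

```python
# Illustrative data hyperparameters, continuing the same `estimator` sketch.
estimator.set_hyperparameters(
    num_classes=21,              # required: number of classes to segment
    num_training_samples=1464,   # required: drives the learning rate scheduler
    base_size=520,               # long-side length used when rescaling before cropping
    crop_size=480,               # square crop side length for training input
)
```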
Training Hyperparameters
Parameter Name | Description |
---|---|
`early_stopping` | Whether to use early stopping logic during training. Optional. Valid values: `True`, `False`. Default value: `False` |
`early_stopping_min_epochs` | The minimum number of epochs that must be run. Optional. Valid values: integer. Default value: 5 |
`early_stopping_patience` | The number of epochs that meet the tolerance for lower performance before the algorithm enforces an early stop. Optional. Valid values: integer. Default value: 4 |
`early_stopping_tolerance` | If the relative improvement of the score of the training job, the mIOU, is smaller than this value, early stopping considers the epoch as not improved. This is used only when `early_stopping` is set to `True`. Optional. Valid values: 0 ≤ float ≤ 1. Default value: 0.0 |
`epochs` | The number of epochs with which to train. Optional. Valid values: positive integer. Default value: 10 |
`gamma1` | The decay factor for the moving average of the squared gradient for the `rmsprop` optimizer. Used only for `rmsprop`. Optional. Valid values: 0 ≤ float ≤ 1. Default value: 0.9 |
`gamma2` | The momentum factor for the `rmsprop` optimizer. Optional. Valid values: 0 ≤ float ≤ 1. Default value: 0.9 |
`learning_rate` | The initial learning rate. Optional. Valid values: 0 < float ≤ 1. Default value: 0.001 |
`lr_scheduler` | The shape of the learning rate schedule that controls its decrease over time. Optional. Valid values: `step`, `poly`, `cosine`. Default value: `poly` |
`lr_scheduler_factor` | If `lr_scheduler` is set to `step`, the ratio by which the learning rate is reduced after each of the epochs listed in `lr_scheduler_step`. Otherwise, ignored. Optional. Valid values: 0 ≤ float ≤ 1. Default value: 0.1 |
`lr_scheduler_step` | A comma delimited list of the epochs after which the learning rate is reduced by `lr_scheduler_factor`. For example, the value "10, 20" reduces the learning rate by `lr_scheduler_factor` after the 10th epoch and again after the 20th epoch. Conditionally Required if `lr_scheduler` is set to `step`; otherwise, ignored. Valid values: string. Default value: (No default, as the value is required when used.) |
`mini_batch_size` | The batch size for training. Using a large `mini_batch_size` usually results in faster training, but it might cause you to run out of memory. Optional. Valid values: positive integer. Default value: 16 |
`momentum` | The momentum for the `sgd` optimizer. When you use other optimizers, the semantic segmentation algorithm ignores this parameter. Optional. Valid values: 0 < float ≤ 1. Default value: 0.9 |
`optimizer` | The type of optimizer to use. Optional. Valid values: `adam`, `adagrad`, `nag`, `rmsprop`, `sgd`. Default value: `sgd` |
`syncbn` | If set to `True`, the batch normalization mean and variance are computed over all the samples processed across the GPUs. Optional. Valid values: `True`, `False`. Default value: `False` |
`validation_mini_batch_size` | The batch size for validation. A large `validation_mini_batch_size` usually results in faster processing, but it might cause you to run out of memory. Optional. Valid values: positive integer. Default value: 16 |
`weight_decay` | The weight decay coefficient for the `sgd` optimizer. When you use other optimizers, the algorithm ignores this parameter. Optional. Valid values: 0 < float < 1. Default value: 0.0001 |
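As a final illustration, the training hyperparameters can be combined on the same estimator sketched earlier. The sketch below pairs the sgd optimizer with a step learning rate schedule and early stopping; the values are examples, not recommendations.

```python
# Illustrative training hyperparameters, continuing the same `estimator` sketch.
estimator.set_hyperparameters(
    epochs=30,
    learning_rate=0.001,
    optimizer="sgd",
    momentum=0.9,                 # used only by the sgd optimizer
    weight_decay=0.0001,          # used only by the sgd optimizer
    lr_scheduler="step",
    lr_scheduler_step="10,20",    # required because lr_scheduler is "step"
    lr_scheduler_factor=0.1,      # learning rate multiplied by 0.1 at epochs 10 and 20
    mini_batch_size=16,
    early_stopping=True,
    early_stopping_min_epochs=5,
    early_stopping_patience=4,
    early_stopping_tolerance=0.0,
)
```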