AWS::SageMaker::MonitoringSchedule BatchTransformInput
Input object for the batch transform job.
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{
  "DataCapturedDestinationS3Uri" : String,
  "DatasetFormat" : DatasetFormat,
  "ExcludeFeaturesAttribute" : String,
  "LocalPath" : String,
  "S3DataDistributionType" : String,
  "S3InputMode" : String
}
YAML
DataCapturedDestinationS3Uri: String
DatasetFormat: DatasetFormat
ExcludeFeaturesAttribute: String
LocalPath: String
S3DataDistributionType: String
S3InputMode: String
Properties
DataCapturedDestinationS3Uri
The Amazon S3 location being used to capture the data.
Required: Yes
Type: String
Pattern: ^(https|s3)://([^/]+)/?(.*)$
Maximum: 512
Update requires: No interruption
DatasetFormat
The dataset format for your batch transform job.
Required: Yes
Type: DatasetFormat
Update requires: No interruption
ExcludeFeaturesAttribute
The attributes of the input data to exclude from the analysis.
Required: No
Type: String
Maximum: 100
Update requires: No interruption
LocalPath
Path to the filesystem where the batch transform data is available to the container.
Required: Yes
Type: String
Pattern: .*
Maximum: 256
Update requires: No interruption
S3DataDistributionType
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Required: No
Type: String
Allowed values: FullyReplicated | ShardedByS3Key
Update requires: No interruption
S3InputMode
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
Required: No
Type: String
Allowed values: Pipe | File
Update requires: No interruption
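As an illustrative sketch, the following YAML fragment shows how a BatchTransformInput with the properties above might be declared inside a MonitoringSchedule resource. The logical ID, schedule name, bucket name, paths, and the CSV dataset format are placeholder assumptions, not values from this page, and the rest of the required MonitoringJobDefinition properties (app specification, outputs, resources, role) are elided.

```yaml
# Hypothetical example: all names, URIs, and paths are placeholders.
MyMonitoringSchedule:
  Type: AWS::SageMaker::MonitoringSchedule
  Properties:
    MonitoringScheduleName: my-batch-transform-monitor
    MonitoringScheduleConfig:
      ScheduleConfig:
        ScheduleExpression: cron(0 * ? * * *)
      MonitoringJobDefinition:
        MonitoringInputs:
          - BatchTransformInput:
              DataCapturedDestinationS3Uri: s3://amzn-s3-demo-bucket/data-capture
              DatasetFormat:
                Csv:
                  Header: true
              LocalPath: /opt/ml/processing/input
              S3DataDistributionType: FullyReplicated
              S3InputMode: File
        # ... remaining required MonitoringJobDefinition properties omitted
```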