AWS::SageMaker::ModelExplainabilityJobDefinition BatchTransformInput
Input object for the batch transform job.
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{
  "DataCapturedDestinationS3Uri" : String,
  "DatasetFormat" : DatasetFormat,
  "FeaturesAttribute" : String,
  "InferenceAttribute" : String,
  "LocalPath" : String,
  "ProbabilityAttribute" : String,
  "S3DataDistributionType" : String,
  "S3InputMode" : String
}
YAML
DataCapturedDestinationS3Uri: String
DatasetFormat: DatasetFormat
FeaturesAttribute: String
InferenceAttribute: String
LocalPath: String
ProbabilityAttribute: String
S3DataDistributionType: String
S3InputMode: String
Properties
DataCapturedDestinationS3Uri
The Amazon S3 location being used to capture the data.
Required: Yes
Type: String
Pattern: ^(https|s3)://([^/]+)/?(.*)$
Maximum: 512
Update requires: Replacement
DatasetFormat
The dataset format for your batch transform job.
Required: Yes
Type: DatasetFormat
Update requires: Replacement
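Because DatasetFormat is a nested property type rather than a plain string, it takes a structured value. As an illustrative sketch (assuming the CSV variant; see the DatasetFormat property-type reference for the full set of options):

```yaml
# Sketch of a DatasetFormat value for CSV input whose first row is a header.
DatasetFormat:
  Csv:
    Header: true
```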
FeaturesAttribute
The attributes of the input data that are the input features.
Required: No
Type: String
Maximum: 256
Update requires: Replacement
InferenceAttribute
The attribute of the input data that represents the ground truth label.
Required: No
Type: String
Maximum: 256
Update requires: Replacement
LocalPath
Path to the filesystem where the batch transform data is available to the container.
Required: Yes
Type: String
Pattern: .*
Maximum: 256
Update requires: Replacement
ProbabilityAttribute
In a classification problem, the attribute that represents the class probability.
Required: No
Type: String
Maximum: 256
Update requires: Replacement
S3DataDistributionType
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Required: No
Type: String
Allowed values: FullyReplicated | ShardedByS3Key
Update requires: Replacement
S3InputMode
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
Required: No
Type: String
Allowed values: Pipe | File
Update requires: Replacement
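Putting the properties above together, a BatchTransformInput block inside a template might look like the following sketch. The bucket name and paths are placeholders, not values from this reference, and the DatasetFormat shown assumes CSV input:

```yaml
# Illustrative fragment only; bucket name and paths are placeholders.
BatchTransformInput:
  DataCapturedDestinationS3Uri: s3://amzn-s3-demo-bucket/data-capture
  DatasetFormat:
    Csv:
      Header: true
  LocalPath: /opt/ml/processing/input
  S3DataDistributionType: FullyReplicated
  S3InputMode: File
```

Note that changing any of these properties requires replacement of the job definition, so updates to this block recreate the resource.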