CreateDataSourceFromRedshift
Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource references data that can be used to perform either CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING.
After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used to perform only CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.
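For example, the following Python sketch (using boto3; the region and datasource ID are placeholder assumptions, not part of this reference) polls GetDataSource until the datasource leaves the PENDING state:

import time

import boto3

# Illustrative sketch: poll an existing datasource until Amazon ML finishes
# processing it. The datasource ID and region below are placeholders.
ml = boto3.client("machinelearning", region_name="us-east-1")
data_source_id = "ds-exampleDatasourceId"

while True:
    response = ml.get_data_source(DataSourceId=data_source_id)
    status = response["Status"]
    if status == "COMPLETED":
        print("Datasource is ready for use.")
        break
    if status == "FAILED":
        # The Message attribute explains why the input source was rejected.
        raise RuntimeError(response.get("Message", "datasource creation failed"))
    time.sleep(30)  # still PENDING or INPROGRESS; wait and poll again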
The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to transfer the result set of the SelectSqlQuery query to S3StagingLocation.
After the DataSource has been created, it's ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource also requires a recipe. A recipe describes how each input variable will be used in training an MLModel. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable, or will it be split apart into word combinations? The recipe provides answers to these questions.
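As a sketch only (the model ID, name, type, and recipe location are placeholder assumptions), training an MLModel from a completed Redshift datasource with boto3 might look like the following; the recipe file itself must follow the recipe grammar described in the Amazon ML Developer Guide.

import boto3

# Illustrative sketch with placeholder values: train a model from a completed
# Redshift datasource, supplying a recipe stored in Amazon S3.
ml = boto3.client("machinelearning", region_name="us-east-1")

ml.create_ml_model(
    MLModelId="ml-exampleModelId",                  # placeholder
    MLModelName="exampleModelName",                 # placeholder
    MLModelType="BINARY",                           # assumed model type
    TrainingDataSourceId="ds-exampleDatasourceId",  # the Redshift datasource
    RecipeUri="s3://bucketName/locationToUri/example.recipe.json",  # placeholder
)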
You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing datasource and copy the values to a CreateDataSource call. Change the settings that you want to change and make sure that all required fields have the appropriate values.
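A sketch of that copy-and-modify workflow with boto3 follows. The RedshiftMetadata field names are assumptions about the GetDataSource response, database credentials are not returned and must be supplied again, and all IDs, queries, and locations are placeholders.

import boto3

# Illustrative sketch: read the settings of an existing Redshift datasource
# and reuse them, with one changed setting, to create a new datasource.
ml = boto3.client("machinelearning", region_name="us-east-1")

existing = ml.get_data_source(DataSourceId="ds-exampleDatasourceId", Verbose=True)
redshift = existing["RedshiftMetadata"]  # assumed to hold the cluster and query details

ml.create_data_source_from_redshift(
    DataSourceId="ds-newDatasourceId",          # new, unique ID
    DataSourceName="copy of exampleDatasourceName",
    DataSpec={
        "DatabaseInformation": redshift["RedshiftDatabase"],
        "DatabaseCredentials": {                # credentials must be re-supplied
            "Username": redshift["DatabaseUserName"],
            "Password": "examplePassword",      # placeholder
        },
        "SelectSqlQuery": "select * from table where segment = 'B'",  # changed setting
        "S3StagingLocation": "s3://bucketName/",
        "DataSchemaUri": "s3://bucketName/locationToUri/example.schema.json",
    },
    RoleARN=existing["RoleARN"],
    ComputeStatistics=True,
)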
Request Syntax
{
   "ComputeStatistics": boolean,
   "DataSourceId": "string",
   "DataSourceName": "string",
   "DataSpec": {
      "DatabaseCredentials": {
         "Password": "string",
         "Username": "string"
      },
      "DatabaseInformation": {
         "ClusterIdentifier": "string",
         "DatabaseName": "string"
      },
      "DataRearrangement": "string",
      "DataSchema": "string",
      "DataSchemaUri": "string",
      "S3StagingLocation": "string",
      "SelectSqlQuery": "string"
   },
   "RoleARN": "string"
}
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- ComputeStatistics

  The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.

  Type: Boolean

  Required: No
- DataSourceId

  A user-supplied ID that uniquely identifies the DataSource.

  Type: String

  Length Constraints: Minimum length of 1. Maximum length of 64.

  Pattern: [a-zA-Z0-9_.-]+

  Required: Yes
- DataSourceName

  A user-supplied name or description of the DataSource.

  Type: String

  Length Constraints: Maximum length of 1024.

  Pattern: .*\S.*|^$

  Required: No
- DataSpec

  The data specification of an Amazon Redshift DataSource:

  - DatabaseInformation
    - DatabaseName - The name of the Amazon Redshift database.
    - ClusterIdentifier - The unique ID for the Amazon Redshift cluster.
  - DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.
  - SelectSqlQuery - The query that is used to retrieve the observation data for the DataSource.
  - S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the SelectSqlQuery query is stored in this location.
  - DataSchemaUri - The Amazon S3 location of the DataSchema.
  - DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.
  - DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the DataSource. A sketch that uses this parameter follows the parameter list.

    Sample - "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"

  Type: RedshiftDataSpec object

  Required: Yes
- RoleARN

  A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:

  - A security group to allow Amazon ML to execute the SelectSqlQuery query on an Amazon Redshift cluster
  - An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the S3StagingLocation

  Type: String

  Length Constraints: Minimum length of 1. Maximum length of 110.

  Required: Yes
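As a sketch only (names, credentials, and locations are placeholders copied from the sample request below), the DataRearrangement parameter can be used to build a training/evaluation pair of datasources over the same query:

import json

import boto3

# Illustrative sketch: create two datasources over the same query, using
# DataRearrangement to take the first 70% of rows for training and the
# remaining 30% for evaluation. All values are placeholders.
ml = boto3.client("machinelearning", region_name="us-east-1")

base_spec = {
    "DatabaseInformation": {"DatabaseName": "dev", "ClusterIdentifier": "test-cluster-1234"},
    "DatabaseCredentials": {"Username": "foo", "Password": "foo"},
    "SelectSqlQuery": "select * from table",
    "S3StagingLocation": "s3://bucketName/",
    "DataSchemaUri": "s3://bucketName/locationToUri/example.schema.json",
}

for suffix, begin, end in [("train", 0, 70), ("eval", 70, 100)]:
    ml.create_data_source_from_redshift(
        DataSourceId=f"ds-exampleDatasourceId-{suffix}",
        DataSourceName=f"exampleDatasourceName-{suffix}",
        DataSpec={
            **base_spec,
            "DataRearrangement": json.dumps(
                {"splitting": {"percentBegin": begin, "percentEnd": end}}
            ),
        },
        RoleARN="arn:aws:iam::<awsAccountId>:role/username",
        ComputeStatistics=(suffix == "train"),  # statistics are required only for training data
    )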
Response Syntax
{
   "DataSourceId": "string"
}
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- DataSourceId

  A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceId in the request.

  Type: String

  Length Constraints: Minimum length of 1. Maximum length of 64.

  Pattern: [a-zA-Z0-9_.-]+
Errors
For information about the errors that are common to all actions, see Common Errors.
- IdempotentParameterMismatchException

  A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

  HTTP Status Code: 400

- InternalServerException

  An error on the server occurred when trying to process a request.

  HTTP Status Code: 500

- InvalidInputException

  An error on the client occurred. Typically, the cause is an invalid input value.

  HTTP Status Code: 400
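As an illustration only (the request values are the placeholders from the sample request below), a boto3 caller might handle these errors by catching botocore's ClientError and inspecting the error code:

import boto3
from botocore.exceptions import ClientError

# Illustrative sketch: map the documented error codes to handling logic.
ml = boto3.client("machinelearning", region_name="us-east-1")

try:
    ml.create_data_source_from_redshift(
        DataSourceId="ds-exampleDatasourceId",
        DataSourceName="exampleDatasourceName",
        DataSpec={
            "DatabaseInformation": {"DatabaseName": "dev", "ClusterIdentifier": "test-cluster-1234"},
            "DatabaseCredentials": {"Username": "foo", "Password": "foo"},
            "SelectSqlQuery": "select * from table",
            "S3StagingLocation": "s3://bucketName/",
            "DataSchemaUri": "s3://bucketName/locationToUri/example.schema.json",
        },
        RoleARN="arn:aws:iam::<awsAccountId>:role/username",
    )
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "InvalidInputException":
        print("Check the request parameters:", err.response["Error"]["Message"])
    elif code == "IdempotentParameterMismatchException":
        print("This DataSourceId was already used with different parameters.")
    else:
        raise  # InternalServerException and anything unexpected: surface it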
Examples
The following is a sample request and response of the CreateDataSourceFromRedshift operation.
Sample Request
POST / HTTP/1.1
Host: machinelearning.<region>.<domain>
x-amz-Date: <Date>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=contenttype;date;host;user-agent;x-amz-date;x-amz-target;x-amzn-requestid,Signature=<Signature>
User-Agent: <UserAgentString>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Connection: Keep-Alive
X-Amz-Target: AmazonML_20141212.CreateDataSourceFromRedshift
{
  "DataSourceId": "ds-exampleDatasourceId",
  "DataSourceName": "exampleDatasourceName",
  "DataSpec": {
    "DatabaseInformation": {
      "DatabaseName": "dev",
      "ClusterIdentifier": "test-cluster-1234"
    },
    "SelectSqlQuery": "select * from table",
    "DatabaseCredentials": {
      "Username": "foo",
      "Password": "foo"
    },
    "S3StagingLocation": "s3://bucketName/",
    "DataSchemaUri": "s3://bucketName/locationToUri/example.schema.json"
  },
  "RoleARN": "arn:aws:iam::<awsAccountId>:role/username"
}
Sample Response
HTTP/1.1 200 OK
x-amzn-RequestId: <RequestId>
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Date: <Date>
{"DataSourceId": "ds-exampleDatasourceId"}
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the AWS SDK documentation for your language.