Managing inference endpoints using the endpoints command
You use the Neptune ML endpoints command to create an inference endpoint, check its status, delete it, or list existing inference endpoints.
Creating an inference endpoint using the Neptune ML endpoints command
A Neptune ML endpoints command for creating an inference endpoint from a model created by a training job looks like this:
curl \
  -X POST https://(your Neptune endpoint)/ml/endpoints \
  -H 'Content-Type: application/json' \
  -d '{
        "id" : "(a unique ID for the new endpoint)",
        "mlModelTrainingJobId": "(the model-training job-id of a completed job)"
      }'
A Neptune ML endpoints command for updating an existing inference endpoint from a model created by a training job looks like this:
curl \
  -X POST https://(your Neptune endpoint)/ml/endpoints \
  -H 'Content-Type: application/json' \
  -d '{
        "id" : "(a unique ID for the new endpoint)",
        "update" : "true",
        "mlModelTrainingJobId": "(the model-training job-id of a completed job)"
      }'
A Neptune ML endpoints command for creating an inference endpoint from a model created by a model-transform job looks like this:
curl \
  -X POST https://(your Neptune endpoint)/ml/endpoints \
  -H 'Content-Type: application/json' \
  -d '{
        "id" : "(a unique ID for the new endpoint)",
        "mlModelTransformJobId": "(the model-transform job-id of a completed job)"
      }'
A Neptune ML endpoints command for updating an existing inference endpoint from a model created by a model-transform job looks like this:
curl \
  -X POST https://(your Neptune endpoint)/ml/endpoints \
  -H 'Content-Type: application/json' \
  -d '{
        "id" : "(a unique ID for the new endpoint)",
        "update" : "true",
        "mlModelTransformJobId": "(the model-transform job-id of a completed job)"
      }'
Parameters for endpoints inference endpoint creation
- id – (Optional) A unique identifier for the new inference endpoint. Type: string. Default: an autogenerated timestamped name.
- mlModelTrainingJobId – The job ID of the completed model-training job that created the model that the inference endpoint will point to. Type: string. Note: You must supply either mlModelTrainingJobId or mlModelTransformJobId.
- mlModelTransformJobId – The job ID of the completed model-transform job. Type: string. Note: You must supply either mlModelTrainingJobId or mlModelTransformJobId.
- update – (Optional) If present, this parameter indicates that this is an update request. Type: Boolean. Default: false. Note: You must supply either mlModelTrainingJobId or mlModelTransformJobId.
- neptuneIamRoleArn – (Optional) The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. Type: string. Note: This must be listed in your DB cluster parameter group or an error will be thrown.
- modelName – (Optional) Model type for training. By default the ML model is automatically based on the modelType used in data processing, but you can specify a different model type here. Type: string. Default: rgcn for heterogeneous graphs and kge for knowledge graphs. Valid values: for heterogeneous graphs, rgcn; for knowledge graphs, kge, transe, distmult, or rotate.
- instanceType – (Optional) The type of ML instance used for online servicing. Type: string. Default: ml.m5.xlarge. Note: Choosing the ML instance for an inference endpoint depends on the task type, the graph size, and your budget. See Selecting an instance for an inference endpoint.
- instanceCount – (Optional) The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction. Type: integer. Default: 1.
- volumeEncryptionKMSKey – (Optional) The AWS Key Management Service (AWS KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the endpoint. Type: string. Default: none.
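Putting several of the optional parameters together, a creation request that overrides the default instance settings might look like this sketch (the job ID, instance choices, and role ARN are all placeholder values):

# Sketch only: every identifier below is a placeholder, not a value
# from a real account.
curl \
  -X POST https://(your Neptune endpoint)/ml/endpoints \
  -H 'Content-Type: application/json' \
  -d '{
        "id" : "my-inference-endpoint",
        "mlModelTrainingJobId": "my-training-job-2023-07-14",
        "instanceType" : "ml.m5.2xlarge",
        "instanceCount" : 2,
        "neptuneIamRoleArn" : "arn:aws:iam::123456789012:role/NeptuneMLRole"
      }'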
Getting the status of an inference endpoint using the Neptune ML endpoints command
A sample Neptune ML endpoints command for checking the status of an inference endpoint looks like this:
curl -s \
  "https://(your Neptune endpoint)/ml/endpoints/(the inference endpoint ID)" \
  | python -m json.tool
Parameters for endpoints inference-endpoint status
- id – (Required) The unique identifier of the inference endpoint. Type: string.
- neptuneIamRoleArn – (Optional) The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. Type: string. Note: This must be listed in your DB cluster parameter group or an error will be thrown.
Deleting an inference endpoint using the Neptune ML endpoints command
A sample Neptune ML endpoints command for deleting an inference endpoint looks like this:
curl -s \
  -X DELETE "https://(your Neptune endpoint)/ml/endpoints/(the inference endpoint ID)"
Or this:
curl -s \
  -X DELETE "https://(your Neptune endpoint)/ml/endpoints/(the inference endpoint ID)?clean=true"
Parameters for endpoints deleting an inference endpoint
- id – (Required) The unique identifier of the inference endpoint. Type: string.
- neptuneIamRoleArn – (Optional) The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. Type: string. Note: This must be listed in your DB cluster parameter group or an error will be thrown.
- clean – (Optional) Indicates that all artifacts related to this endpoint should also be deleted. Type: Boolean. Default: false.
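One way to sanity-check a deletion is to follow it with the list command from the next section and confirm that the endpoint ID no longer appears. A minimal sketch (the endpoint ID is a placeholder):

# Delete the endpoint and its related artifacts, then list what remains.
# The endpoint ID below is a placeholder.
curl -s -X DELETE "https://(your Neptune endpoint)/ml/endpoints/node-classification-endpoint?clean=true"
curl -s "https://(your Neptune endpoint)/ml/endpoints" | python -m json.tool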
Listing inference endpoints using the Neptune ML endpoints command
A Neptune ML endpoints command for listing inference endpoints looks like this:
curl -s "https://
(your Neptune endpoint)
/ml/endpoints" \ | python -m json.tool
Or this:
curl -s "https://
(your Neptune endpoint)
/ml/endpoints?maxItems=3" \ | python -m json.tool
Parameters for endpoints listing inference endpoints
- maxItems – (Optional) The maximum number of items to return. Type: integer. Default: 10. Maximum allowed value: 1024.
- neptuneIamRoleArn – (Optional) The ARN of an IAM role providing Neptune access to SageMaker and Amazon S3 resources. Type: string. Note: This must be listed in your DB cluster parameter group or an error will be thrown.
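To combine listing with the status command above, you can iterate over the returned IDs. The sketch below assumes the list response is a JSON object containing an ids array; verify that against your actual output before relying on it:

# Sketch: print the status of every inference endpoint. Assumes (unverified)
# that the list response has the form {"ids": [...]}.
for id in $(curl -s "https://(your Neptune endpoint)/ml/endpoints" \
    | python -c 'import json,sys; print("\n".join(json.load(sys.stdin).get("ids", [])))'); do
  echo "--- ${id} ---"
  curl -s "https://(your Neptune endpoint)/ml/endpoints/${id}" | python -m json.tool
done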