Manage Model
The Edge Manager agent can load multiple models at a time and perform inference with the loaded models on edge devices. The number of models the agent can load is determined by the available memory on the device. The agent validates the model signature and loads into memory all the artifacts produced by the edge packaging job. This step requires all the required certificates described in the previous steps to be installed, along with the rest of the binary installation. If the model's signature cannot be validated, loading the model fails with an appropriate return code and reason.
The SageMaker Edge Manager agent provides a set of model management APIs that implement control plane and data plane operations on edge devices. Along with this documentation, we recommend reviewing the sample client implementation, which shows canonical usage of the APIs described below.
The proto file is available as part of the release artifacts (inside the release tarball). In this doc, we list and describe the usage of the APIs listed in this proto file.
Note
There is a one-to-one mapping for these APIs in the Windows release, and sample code for an application implemented in C# is shared with the Windows release artifacts. The instructions below are for running the agent as a standalone process, and apply to the release artifacts for Linux.
Extract the archive based on your OS, where VERSION is broken into three components: <MAJOR_VERSION>.<YYYY-MM-DD>-<SHA-7>. See Installing the Edge Manager agent for information on how to obtain the release version (<MAJOR_VERSION>), the time stamp of the release artifact (<YYYY-MM-DD>), and the repository commit ID (<SHA-7>).
The release artifact hierarchy (after extracting the tar/zip archive) is shown below. The agent proto file is available under api/.
0.20201205.7ee4b0b
├── bin
│   ├── sagemaker_edge_agent_binary
│   └── sagemaker_edge_agent_client_example
└── docs
    ├── api
    │   └── agent.proto
    ├── attributions
    │   ├── agent.txt
    │   └── core.txt
    └── examples
        └── ipc_example
            ├── CMakeLists.txt
            ├── sagemaker_edge_client.cc
            ├── sagemaker_edge_client_example.cc
            ├── sagemaker_edge_client.hh
            ├── sagemaker_edge.proto
            ├── README.md
            ├── shm.cc
            ├── shm.hh
            └── street_small.bmp
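As a quick illustration of the version layout, the release directory name shown above (for example, 0.20201205.7ee4b0b) can be split into its three components. The helper below is a sketch, not part of the release artifacts:

```python
# Sketch: split a release directory name like "0.20201205.7ee4b0b"
# into (major_version, release_date, commit_sha). This assumes the
# directory name joins the three components with dots, as in the
# hierarchy above.
def parse_release_version(name: str):
    major, date, sha = name.split(".")
    return major, date, sha

print(parse_release_version("0.20201205.7ee4b0b"))
# -> ('0', '20201205', '7ee4b0b')
```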
Load Model
The Edge Manager agent supports loading multiple models. This API validates the model signature and loads into memory all the artifacts produced by the EdgePackagingJob operation. This step requires all the required certificates to be installed along with the rest of the agent binary installation. If the model's signature cannot be validated, this step fails with an appropriate return code and error messages in the log.
// perform load for a model
// Note:
// 1. currently only local filesystem paths are supported for loading models.
// 2. multiple models can be loaded at the same time, as limited by available device memory
// 3. users are required to unload any loaded model to load another model.
// Status Codes:
// 1. OK - load is successful
// 2. UNKNOWN - unknown error has occurred
// 3. INTERNAL - an internal error has occurred
// 4. NOT_FOUND - model doesn't exist at the url
// 5. ALREADY_EXISTS - model with the same name is already loaded
// 6. RESOURCE_EXHAUSTED - memory is not available to load the model
// 7. FAILED_PRECONDITION - model is not compiled for the machine.
//
rpc LoadModel(LoadModelRequest) returns (LoadModelResponse);
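The control-plane semantics described in the comments above can be modeled client-side. The following Python sketch is hypothetical (it is not the agent implementation and uses made-up names); it only mimics how the agent is documented to respond: a duplicate name yields ALREADY_EXISTS, a missing path yields NOT_FOUND, and insufficient memory yields RESOURCE_EXHAUSTED:

```python
import os

# Hypothetical in-memory model of the agent's documented LoadModel
# behavior; the status-code strings mirror the proto comments above.
class ToyModelRegistry:
    def __init__(self, memory_bytes):
        self.memory_free = memory_bytes
        self.loaded = {}  # model name -> url

    def load_model(self, name, url, size_bytes, exists=os.path.exists):
        if name in self.loaded:
            return "ALREADY_EXISTS"      # same name is already loaded
        if not exists(url):
            return "NOT_FOUND"           # model doesn't exist at the url
        if size_bytes > self.memory_free:
            return "RESOURCE_EXHAUSTED"  # not enough device memory
        self.loaded[name] = url
        self.memory_free -= size_bytes
        return "OK"

registry = ToyModelRegistry(memory_bytes=512)
# Pretend the artifact exists on the local filesystem:
print(registry.load_model("resnet", "/models/resnet", 256, exists=lambda p: True))  # OK
print(registry.load_model("resnet", "/models/resnet", 256, exists=lambda p: True))  # ALREADY_EXISTS
```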
Unload Model
Unloads a previously loaded model. The model is identified via the model alias that was provided during LoadModel. If the alias is not found or the model is not loaded, an error is returned.
//
// perform unload for a model
// Status Codes:
// 1. OK - unload is successful
// 2. UNKNOWN - unknown error has occurred
// 3. INTERNAL - an internal error has occurred
// 4. NOT_FOUND - model doesn't exist
//
rpc UnLoadModel(UnLoadModelRequest) returns (UnLoadModelResponse);
List Models
Lists all the loaded models and their aliases.
//
// lists the loaded models
// Status Codes:
// 1. OK - list is successful
// 2. UNKNOWN - unknown error has occurred
// 3. INTERNAL - an internal error has occurred
//
rpc ListModels(ListModelsRequest) returns (ListModelsResponse);
Describe Model
Describes a model that is loaded on the agent.
//
// Status Codes:
// 1. OK - describe is successful
// 2. UNKNOWN - unknown error has occurred
// 3. INTERNAL - an internal error has occurred
// 4. NOT_FOUND - model doesn't exist
//
rpc DescribeModel(DescribeModelRequest) returns (DescribeModelResponse);
Capture Data
Allows the client application to capture input and output tensors in an Amazon S3 bucket, and optionally the auxiliary data. The client application is expected to pass a unique capture ID along with each call to this API; the ID can later be used to query the status of the capture.
//
// allows users to capture input and output tensors along with auxiliary data.
// Status Codes:
// 1. OK - data capture successfully initiated
// 2. UNKNOWN - unknown error has occurred
// 3. INTERNAL - an internal error has occurred
// 4. ALREADY_EXISTS - capture already initiated for the given capture_id
// 5. RESOURCE_EXHAUSTED - buffer is full, cannot accept any more requests.
// 6. OUT_OF_RANGE - timestamp is in the future.
// 7. INVALID_ARGUMENT - capture_id is not of the expected format.
//
rpc CaptureData(CaptureDataRequest) returns (CaptureDataResponse);
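The expected capture_id format is defined by the agent and is not specified here. The sketch below only illustrates the one property the API does require, that each call supplies its own unique ID; a UUID string is used purely as a stand-in:

```python
import uuid

# Sketch: the client supplies a unique capture ID per CaptureData call.
# The agent's expected ID format is not documented in this section; a
# UUID string is a hypothetical stand-in for "unique per call".
def new_capture_id() -> str:
    return str(uuid.uuid4())

first = new_capture_id()
second = new_capture_id()
assert first != second  # each call gets its own ID to query status with later
```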
Get Capture Status
Depending on the models loaded the input and output tensors can
be large (for many edge devices). Capture to the cloud can be time
consuming. So the CaptureData()
is implemented as an asynchronous
operation. A capture ID is a unique identifier that the client provides
during capture data call, this ID can be used to query the status
of the asynchronous call.
//
// allows users to query status of capture data operation
// Status Codes:
// 1. OK - capture status returned successfully
// 2. UNKNOWN - unknown error has occurred
// 3. INTERNAL - an internal error has occurred
// 4. NOT_FOUND - given capture id doesn't exist.
//
rpc GetCaptureDataStatus(GetCaptureDataStatusRequest) returns (GetCaptureDataStatusResponse);
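Because the capture is asynchronous, a client typically initiates it and then polls its status by ID. The following toy sketch models that pattern in-process with a background thread; it is an illustration of the initiate-then-poll flow, not the agent's IPC interface:

```python
import threading
import time

# Toy async capture service: capture_data returns immediately and the
# upload runs in the background; get_capture_status is polled by ID.
class ToyCaptureService:
    def __init__(self):
        self.status = {}  # capture_id -> "IN_PROGRESS" | "SUCCESS"

    def capture_data(self, capture_id):
        if capture_id in self.status:
            return "ALREADY_EXISTS"   # capture already initiated for this ID
        self.status[capture_id] = "IN_PROGRESS"
        threading.Thread(target=self._upload, args=(capture_id,)).start()
        return "OK"

    def _upload(self, capture_id):
        time.sleep(0.01)  # stand-in for the actual upload work
        self.status[capture_id] = "SUCCESS"

    def get_capture_status(self, capture_id):
        return self.status.get(capture_id, "NOT_FOUND")

svc = ToyCaptureService()
svc.capture_data("cap-001")
while svc.get_capture_status("cap-001") == "IN_PROGRESS":
    time.sleep(0.005)  # poll until the background upload finishes
print(svc.get_capture_status("cap-001"))  # SUCCESS
```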
Predict
The Predict API performs inference on a previously loaded model. It accepts a request in the form of a tensor that is fed directly into the neural network, and returns the output tensor (or scalar) from the model. This is a blocking call.
//
// perform inference on a model.
//
// Note:
// 1. users can choose to send the tensor data in the protobuf message or
//    through a shared memory segment on a per tensor basis, the Predict
//    method will handle the decode transparently.
// 2. serializing large tensors into the protobuf message can be quite expensive,
//    based on our measurements it is recommended to use shared memory for
//    tensors larger than 256KB.
// 3. SMEdge IPC server will not use shared memory for returning output tensors,
//    i.e., the output tensor data will always be sent in byte form encoded
//    in the tensors of PredictResponse.
// 4. currently SMEdge IPC server cannot handle concurrent predict calls, all
//    these calls will be serialized under the hood. this shall be addressed
//    in a later release.
// Status Codes:
// 1. OK - prediction is successful
// 2. UNKNOWN - unknown error has occurred
// 3. INTERNAL - an internal error has occurred
// 4. NOT_FOUND - when model not found
// 5. INVALID_ARGUMENT - when tensor types mismatch
//
rpc Predict(PredictRequest) returns (PredictResponse);
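Note 2 above recommends shared memory for tensors larger than 256 KB. A client might encode that rule of thumb as a small helper; this is a sketch whose function name is hypothetical, and the threshold is simply the figure quoted in the comment above (it may differ for your workload):

```python
# Rule of thumb from the proto comment above: tensors larger than
# 256KB are cheaper to pass via a shared memory segment than to
# serialize into the protobuf message.
SHM_THRESHOLD_BYTES = 256 * 1024

def pick_transport(tensor_size_bytes: int) -> str:
    """Return how to ship an input tensor to the agent: inline in the
    protobuf message, or via a shared memory segment."""
    return "shared_memory" if tensor_size_bytes > SHM_THRESHOLD_BYTES else "protobuf"

print(pick_transport(1024))         # protobuf
print(pick_transport(1024 * 1024))  # shared_memory
```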