Inference Requests With a Deployed Service

If you have followed the instructions in Deploy a Model, you should have a SageMaker AI endpoint set up and running. Regardless of how you deployed your Neo-compiled model, there are three ways to submit inference requests, listed below.
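Whichever option you choose, the request ultimately reaches the endpoint through the SageMaker Runtime InvokeEndpoint API. As a minimal sketch using the AWS SDK for Python (Boto3), an invocation might look like the following; the endpoint name, content type, and payload are placeholders that you should replace with values matching your deployed model.

import boto3

# Create a client for the SageMaker Runtime service, which serves inference requests.
runtime = boto3.client("sagemaker-runtime")

# Send a payload to the deployed endpoint. The endpoint name below is hypothetical;
# use the name of the endpoint you created when you deployed your Neo-compiled model.
response = runtime.invoke_endpoint(
    EndpointName="my-neo-compiled-endpoint",   # placeholder endpoint name
    ContentType="application/x-image",          # set to the content type your model expects
    Body=open("test_image.jpg", "rb").read(),   # example payload; format depends on your model
)

# The response body is a stream containing the model's prediction.
print(response["Body"].read())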