

# Inference Requests With a Deployed Service
<a name="neo-requests"></a>

If you followed the instructions in [Deploy a Model](neo-deployment-hosting-services.md), you should have a SageMaker AI endpoint up and running. Regardless of how you deployed your Neo-compiled model, there are three ways to submit inference requests:

**Topics**
+ [Request Inferences from a Deployed Service (Amazon SageMaker SDK)](neo-requests-sdk.md)
+ [Request Inferences from a Deployed Service (Boto3)](neo-requests-boto3.md)
+ [Request Inferences from a Deployed Service (AWS CLI)](neo-requests-cli.md)
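
As a quick preview of the Boto3 path, the sketch below shows how a request to a deployed endpoint is typically made with the SageMaker runtime client's `invoke_endpoint` call. The endpoint name, region, and JSON content type are illustrative assumptions; your endpoint may expect a different serialization format, as described in the linked topics.

```python
import json


def build_payload(features):
    """Serialize a feature vector as a JSON request body (an assumed format;
    your model's container may expect CSV or another content type)."""
    return json.dumps(features).encode("utf-8")


def invoke(endpoint_name, features, region="us-west-2"):
    """Send one inference request to a deployed SageMaker AI endpoint.

    `endpoint_name` and `region` are placeholders for your own values.
    """
    import boto3  # imported here so the payload helper works without boto3 installed

    client = boto3.client("sagemaker-runtime", region_name=region)
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_payload(features),
    )
    # The response body is a streaming object; read and decode it.
    return json.loads(response["Body"].read())
```

The equivalent request can also be made through the SageMaker Python SDK's `Predictor` or the AWS CLI's `sagemaker-runtime invoke-endpoint` command, as covered in the topics above.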