You can use popular model servers, such as TorchServe, DJL Serving, and Triton Inference Server, to deploy your models on SageMaker AI. The following topics explain how to deploy models with each of these servers.
© 2025, Amazon Web Services, Inc. or its affiliates. All rights reserved.