Custom Inference Code with Batch Transform
This section explains how Amazon SageMaker AI interacts with a Docker container that runs your own inference code for batch transform. Use this information to write inference code and create a Docker image.
How SageMaker AI Runs Your Inference Image
To configure a container to run as an executable, use an `ENTRYPOINT` instruction in a Dockerfile. Note the following:
- For batch transforms, SageMaker AI invokes the model on your behalf. SageMaker AI runs the container as `docker run image serve`. The input to batch transforms must be in a format that can be split into smaller files to process in parallel. These formats include CSV, JSON, JSON Lines, TFRecord, and RecordIO. SageMaker AI overrides default `CMD` statements in a container by specifying the `serve` argument after the image name. The `serve` argument overrides arguments that you provide with the `CMD` command in the Dockerfile.
- We recommend that you use the exec form of the `ENTRYPOINT` instruction: `ENTRYPOINT ["executable", "param1", "param2"]`. For example: `ENTRYPOINT ["python", "k_means_inference.py"]`
- SageMaker AI sets the environment variables specified in `CreateModel` and `CreateTransformJob` on your container. Additionally, the following environment variables are populated (see the sketch after this list for an example of reading them):
  - `SAGEMAKER_BATCH` is set to `true` when the container runs batch transforms.
  - `SAGEMAKER_MAX_PAYLOAD_IN_MB` is set to the largest size payload that is sent to the container via HTTP.
  - `SAGEMAKER_BATCH_STRATEGY` is set to `SINGLE_RECORD` when the container is sent a single record per call to invocations, and `MULTI_RECORD` when the container gets as many records as will fit in the payload.
  - `SAGEMAKER_MAX_CONCURRENT_TRANSFORMS` is set to the maximum number of `/invocations` requests that can be opened simultaneously.
  Note: The last three environment variables come from the API call made by the user. If the user doesn't set values for them, they aren't passed. In that case, either the default values or the values requested by the algorithm (in response to the `/execution-parameters` request) are used.
- If you plan to use GPU devices for model inferences (by specifying GPU-based ML compute instances in your `CreateTransformJob` request), make sure that your containers are nvidia-docker compatible. Don't bundle NVIDIA drivers with the image. For more information about nvidia-docker, see NVIDIA/nvidia-docker.
- You can't use the `init` initializer as your entry point in SageMaker AI containers because it gets confused by the `train` and `serve` arguments.
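As a rough illustration, the following Python sketch reads these batch transform environment variables at container startup. The fallback values used when a variable is absent are assumptions for local testing, not defaults guaranteed by SageMaker AI.

```python
import os

# SageMaker AI sets these variables for batch transform containers.
# The fallbacks below are only for running this code outside SageMaker AI.
is_batch = os.environ.get("SAGEMAKER_BATCH", "false") == "true"
max_payload_mb = int(os.environ.get("SAGEMAKER_MAX_PAYLOAD_IN_MB", "6"))
batch_strategy = os.environ.get("SAGEMAKER_BATCH_STRATEGY", "MULTI_RECORD")
max_concurrent = int(os.environ.get("SAGEMAKER_MAX_CONCURRENT_TRANSFORMS", "1"))

if is_batch:
    print(f"Batch transform: strategy={batch_strategy}, "
          f"max payload={max_payload_mb} MB, "
          f"max concurrent invocations={max_concurrent}")
```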
How SageMaker AI Loads Your Model Artifacts
In a `CreateModel` request, container definitions include the `ModelDataUrl` parameter, which identifies the Amazon S3 location where model artifacts are stored. When you use SageMaker AI to run inferences, it uses this information to determine where to copy the model artifacts from. It copies the artifacts to the `/opt/ml/model` directory in the Docker container for use by your inference code.
The `ModelDataUrl` parameter must point to a tar.gz file. Otherwise, SageMaker AI can't download the file. If you train a model in SageMaker AI, it saves the artifacts as a single compressed tar file in Amazon S3. If you train a model in another framework, you need to store the model artifacts in Amazon S3 as a compressed tar file. SageMaker AI decompresses this tar file and saves it in the `/opt/ml/model` directory in the container before the batch transform job starts.
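For example, inference code might load the extracted artifacts from this directory when the container starts. The sketch below assumes a scikit-learn model serialized with joblib as `model.joblib`; the file name and framework are illustrative, not requirements of SageMaker AI.

```python
import os

import joblib  # assumes a scikit-learn/joblib artifact; any framework works

MODEL_DIR = "/opt/ml/model"  # SageMaker AI extracts the tar.gz contents here


def load_model():
    # The artifact file name is whatever your training code wrote into the
    # tarball; "model.joblib" is only an example.
    return joblib.load(os.path.join(MODEL_DIR, "model.joblib"))


model = load_model()
```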
How Containers Serve Requests
Containers must implement a web server that responds to invocations and ping requests on port 8080. For batch transforms, you can optionally implement execution-parameters requests so that your algorithm provides a dynamic runtime configuration to SageMaker AI. SageMaker AI uses the following endpoints:
- `ping`—Used to periodically check the health of the container. SageMaker AI waits for an HTTP 200 status code and an empty body for a successful ping request before sending an invocations request. You might use a ping request to load a model into memory to generate inferences when invocations requests are sent.
- (Optional) `execution-parameters`—Allows the algorithm to provide the optimal tuning parameters for a job during runtime. Based on the memory and CPUs available for a container, the algorithm chooses the appropriate `MaxConcurrentTransforms`, `BatchStrategy`, and `MaxPayloadInMB` values for the job.
Before calling the invocations request, SageMaker AI attempts to invoke the
            execution-parameters request. When you create a batch transform job, you can provide
            values for the MaxConcurrentTransforms, BatchStrategy, and
                MaxPayloadInMB parameters. SageMaker AI determines the values for these
            parameters using this order of precedence:
- The parameter values that you provide when you create the `CreateTransformJob` request.
- The values that the model container returns when SageMaker AI invokes the execution-parameters endpoint.
- The default parameter values, listed in the following table.

  | Parameter | Default Value |
  | --- | --- |
  | `MaxConcurrentTransforms` | 1 |
  | `BatchStrategy` | `MULTI_RECORD` |
  | `MaxPayloadInMB` | 6 |
The response for a GET execution-parameters request is a JSON object with
            keys for MaxConcurrentTransforms, BatchStrategy, and
                MaxPayloadInMB parameters. This is an example of a valid
            response:
{ "MaxConcurrentTransforms": 8, "BatchStrategy": "MULTI_RECORD", "MaxPayloadInMB": 6 }
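One possible way to wire these endpoints together is shown in the following Flask sketch. The web framework, the CSV content type, and the trivial `predict_fn` are assumptions made for illustration; the only requirements from SageMaker AI are the port, the endpoint paths, and the response shapes described above.

```python
from flask import Flask, Response, jsonify, request

app = Flask(__name__)


def predict_fn(payload: bytes) -> bytes:
    # Placeholder inference logic: echo the input. Replace with real model code.
    return payload


@app.route("/ping", methods=["GET"])
def ping():
    # Return HTTP 200 with an empty body to report a healthy container.
    return Response(status=200)


@app.route("/execution-parameters", methods=["GET"])
def execution_parameters():
    # Optional: report tuning values chosen for this container's resources.
    return jsonify({
        "MaxConcurrentTransforms": 8,
        "BatchStrategy": "MULTI_RECORD",
        "MaxPayloadInMB": 6,
    })


@app.route("/invocations", methods=["POST"])
def invocations():
    # The request body carries one chunk of the batch input from Amazon S3.
    predictions = predict_fn(request.get_data())
    return Response(predictions, status=200, mimetype="text/csv")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```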
How Your Container Should Respond to Inference Requests
To obtain inferences, Amazon SageMaker AI sends a POST request to the inference container. The POST request body contains data from Amazon S3. Amazon SageMaker AI passes the request to the container, retrieves the inference result from the container's response, and saves the response data to Amazon S3.
To receive inference requests, the container must have a web server listening on port
            8080 and must accept POST requests to the /invocations endpoint. The
            inference request timeout and max retries can be configured through ModelClientConfig.
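As a hedged example, the following boto3 call shows where these settings live when you create a transform job; the job name, model name, S3 URIs, and instance type are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_transform_job(
    TransformJobName="example-transform-job",  # placeholder name
    ModelName="example-model",                 # placeholder model
    ModelClientConfig={
        "InvocationsTimeoutInSeconds": 600,  # per-request timeout
        "InvocationsMaxRetries": 1,          # retries for failed /invocations calls
    },
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://amzn-s3-demo-bucket/input/",  # placeholder URI
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",
    },
    TransformOutput={"S3OutputPath": "s3://amzn-s3-demo-bucket/output/"},  # placeholder URI
    TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
)
```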
How Your Container Should Respond to Health Check (Ping) Requests
The simplest requirement on the container is to respond with an HTTP 200 status code
            and an empty body. This indicates to SageMaker AI that the container is ready to accept
            inference requests at the /invocations endpoint.
While the minimum bar is for the container to return a static 200, a container
            developer can use this functionality to perform deeper checks. The request timeout on
                /ping attempts is 2 seconds.
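For example, a deeper health check might report healthy only after the model artifacts have loaded. The Flask pattern and the module-level `model` variable below are carried over from the earlier sketches and are illustrative only.

```python
from flask import Flask, Response

app = Flask(__name__)
model = None  # set by your model-loading code at container startup


@app.route("/ping", methods=["GET"])
def ping():
    # Report healthy only once the model is in memory; a non-200 status tells
    # SageMaker AI the container isn't ready for /invocations requests yet.
    return Response(status=200 if model is not None else 503)
```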