Specifying GPUs in an Amazon ECS task definition
To use the GPUs on a container instance and the Docker GPU runtime, make sure that you designate the number of GPUs your container requires in the task definition. As containers that support GPUs are placed, the Amazon ECS container agent pins the desired number of physical GPUs to the appropriate container. The number of GPUs reserved for all containers in a task cannot exceed the number of available GPUs on the container instance the task is launched on. For more information, see Creating an Amazon ECS task definition using the console.
Important
If your GPU requirements aren't specified in the task definition, the task uses the default Docker runtime.
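Because the GPUs reserved for a task cannot exceed what the container instance has available, it can help to confirm how many GPUs the instance registered with the container agent and how many remain unreserved before you size the task. The following is a minimal sketch using the AWS CLI; the cluster name and container instance ID are placeholders, and it assumes the instance was launched from an Amazon ECS GPU-optimized AMI so that a GPU resource is registered with the agent.

# List the container instances registered to the cluster (cluster name is a placeholder).
aws ecs list-container-instances --cluster my-gpu-cluster

# Show the GPU resources the agent registered on one instance and what remains unreserved.
aws ecs describe-container-instances \
    --cluster my-gpu-cluster \
    --container-instances <container-instance-id> \
    --query "containerInstances[].{registered: registeredResources[?name=='GPU'], remaining: remainingResources[?name=='GPU']}"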
The following shows the JSON format for the GPU requirements in a task definition:
{ "containerDefinitions": [ { ... "resourceRequirements" : [ { "type" : "GPU", "value" : "
2
" } ], }, ... }
The following example demonstrates the syntax for a Docker container that specifies a GPU requirement. This container uses two GPUs, runs the nvidia-smi utility, and then exits.
{ "containerDefinitions": [ { "memory": 80, "essential": true, "name": "gpu", "image": "nvidia/cuda:11.0.3-base", "resourceRequirements": [ { "type":"GPU", "value": "2" } ], "command": [ "sh", "-c", "nvidia-smi" ], "cpu": 100 } ], "family": "example-ecs-gpu" }