Run a multi-node MPI job with Slurm in AWS PCS

These instructions demonstrate how to use Slurm to run a multi-node Message Passing Interface (MPI) job in AWS PCS.

Run the following commands at a shell prompt on your login node.

  • Become the default user. Change to its home directory.

    sudo su - ec2-user
    cd ~/
  • Create source code in the C programming language.

    cat > hello.c << EOF
    // * mpi-hello-world - https://www.mpitutorial.com
    // Released under MIT License
    //
    // Copyright (c) 2014 MPI Tutorial.
    //
    // Permission is hereby granted, free of charge, to any person obtaining a copy
    // of this software and associated documentation files (the "Software"), to
    // deal in the Software without restriction, including without limitation the
    // rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
    // sell copies of the Software, and to permit persons to whom the Software is
    // furnished to do so, subject to the following conditions:
    // The above copyright notice and this permission notice shall be included in
    // all copies or substantial portions of the Software.
    //
    // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    // FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
    // DEALINGS IN THE SOFTWARE.

    #include <mpi.h>
    #include <stdio.h>
    #include <stddef.h>

    int main(int argc, char** argv) {
      // Initialize the MPI environment. The two arguments to MPI Init are not
      // currently used by MPI implementations, but are there in case future
      // implementations might need the arguments.
      MPI_Init(NULL, NULL);

      // Get the number of processes
      int world_size;
      MPI_Comm_size(MPI_COMM_WORLD, &world_size);

      // Get the rank of the process
      int world_rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

      // Get the name of the processor
      char processor_name[MPI_MAX_PROCESSOR_NAME];
      int name_len;
      MPI_Get_processor_name(processor_name, &name_len);

      // Print off a hello world message
      printf("Hello world from processor %s, rank %d out of %d processors\n",
             processor_name, world_rank, world_size);

      // Finalize the MPI environment. No more MPI calls can be made after this
      MPI_Finalize();
    }
    EOF
  • Load the OpenMPI module.

    module load openmpi
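
    (Optional) Before compiling, you can confirm that the module put the Open MPI compiler wrapper on your PATH; the path shown will depend on your environment.

    which mpicc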
  • Compile the C program.

    mpicc -o hello hello.c
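
    (Optional) As a quick sanity check, and assuming that running a small single-process test on the login node is acceptable in your environment, you can launch the binary with one rank before submitting it to the cluster. It should print a single hello-world line that names the login node.

    mpirun -n 1 ./hello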
  • Write a Slurm job submission script.

    cat > hello.sh << EOF
    #!/bin/bash
    #SBATCH -J multi
    #SBATCH -o multi.out
    #SBATCH -e multi.err
    #SBATCH --exclusive
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=1

    srun $HOME/hello
    EOF
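
    The #SBATCH directives name the job multi, send its standard output and error to multi.out and multi.err, and request exclusive use of 4 nodes with one task per node, so srun launches a total of 4 MPI ranks.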
  • Change to the shared directory. Slurm writes the job's output and error files relative to the directory you submit from, so multi.out and multi.err will be created here.

    cd /shared
  • Submit the job script to the demo partition (selected with the -p option).

    sbatch -p demo ~/hello.sh
  • Use squeue to monitor the job until it's done.
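
    For example, you can check the queue with a command such as the following (the -u option limits the listing to jobs owned by the ec2-user account from the first step); the job is finished when it no longer appears in the output.

    squeue -u ec2-user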

  • Check the contents of multi.out:

    cat multi.out

    The output is similar to the following. Note that each rank reports a different hostname (derived from the node's private IP address) because each rank ran on a separate node.

    Hello world from processor ip-10-3-133-204, rank 0 out of 4 processors
    Hello world from processor ip-10-3-128-219, rank 2 out of 4 processors
    Hello world from processor ip-10-3-141-26, rank 3 out of 4 processors
    Hello world from processor ip-10-3-143-52, rank 1 out of 4 processors