Run multi-node MPI jobs with Slurm in AWS PCS

These instructions show how to use Slurm to run a multi-node message passing interface (MPI) job in AWS PCS.

Run the following commands at a shell prompt on your login node.

  • Become the default user and change to its home directory.

    sudo su - ec2-user
    cd ~/
  • Create the source code in the C programming language.

    cat > hello.c << EOF
    // * mpi-hello-world - https://www.mpitutorial.com
    // Released under MIT License
    //
    // Copyright (c) 2014 MPI Tutorial.
    //
    // Permission is hereby granted, free of charge, to any person obtaining a copy
    // of this software and associated documentation files (the "Software"), to
    // deal in the Software without restriction, including without limitation the
    // rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
    // sell copies of the Software, and to permit persons to whom the Software is
    // furnished to do so, subject to the following conditions:
    //
    // The above copyright notice and this permission notice shall be included in
    // all copies or substantial portions of the Software.
    //
    // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    // FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
    // DEALINGS IN THE SOFTWARE.

    #include <mpi.h>
    #include <stdio.h>
    #include <stddef.h>

    int main(int argc, char** argv) {
      // Initialize the MPI environment. The two arguments to MPI Init are not
      // currently used by MPI implementations, but are there in case future
      // implementations might need the arguments.
      MPI_Init(NULL, NULL);

      // Get the number of processes
      int world_size;
      MPI_Comm_size(MPI_COMM_WORLD, &world_size);

      // Get the rank of the process
      int world_rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

      // Get the name of the processor
      char processor_name[MPI_MAX_PROCESSOR_NAME];
      int name_len;
      MPI_Get_processor_name(processor_name, &name_len);

      // Print off a hello world message
      printf("Hello world from processor %s, rank %d out of %d processors\n",
             processor_name, world_rank, world_size);

      // Finalize the MPI environment. No more MPI calls can be made after this
      MPI_Finalize();
    }
    EOF
  • Load the Open MPI module.

    module load openmpi
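    Optionally, confirm that the module placed Open MPI on your PATH. This check is not part of the original procedure, but any working Open MPI installation responds to it:

    mpirun --version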
  • Compile the C program.

    mpicc -o hello hello.c
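    As an optional sanity check (an illustration, not part of the original steps), an Open MPI program can also run as a single process without the scheduler. Executing the binary directly on the login node should print one line for rank 0 out of 1 processors:

    ./hello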
  • Write a Slurm job submission script. The #SBATCH directives below request four nodes in exclusive mode, with one task per node, so each of the four MPI ranks runs on a different node.

    cat > hello.sh << EOF
    #!/bin/bash
    #SBATCH -J multi
    #SBATCH -o multi.out
    #SBATCH -e multi.err
    #SBATCH --exclusive
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=1

    srun $HOME/hello
    EOF
  • Change to the shared directory. Slurm writes the job's output files relative to the submission directory, so it must be on storage the compute nodes can access.

    cd /shared
  • Submit the job script.

    sbatch -p demo ~/hello.sh
  • Use squeue to monitor the job until it completes, as in the example below.
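
    For instance, the following lists only your own jobs; rerun it until the job no longer appears in the queue (the -u filter is an illustration, not from the original procedure):

    squeue -u $USER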

  • Check the contents of multi.out:

    cat multi.out

    Your output will be similar to the following. Note that each rank has its own IP address because it ran on a different node.

    Hello world from processor ip-10-3-133-204, rank 0 out of 4 processors
    Hello world from processor ip-10-3-128-219, rank 2 out of 4 processors
    Hello world from processor ip-10-3-141-26, rank 3 out of 4 processors
    Hello world from processor ip-10-3-143-52, rank 1 out of 4 processors