Setting up Intel MPI with Spack
Installing Spack
git clone https://github.com/spack/spack
. spack/share/spack/setup-env.sh
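A quick sanity check (my addition, not part of the original steps) confirms the shell integration works; on a fresh clone, spack find should report zero installed packages:
spack --version
spack find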
Just to be safe, install the latest gcc available in Spack:
spack install gcc
spack load gcc
spack compiler find
$ spack compilers
==> Available compilers
-- gcc ubuntu20.04-x86_64 ---------------------------------------
gcc@9.3.0 gcc@13.2.0
$ spack compiler info gcc
gcc@9.3.0:
    paths:
        cc = /usr/bin/gcc
        cxx = /usr/bin/g++
        f77 = None
        fc = None
    modules = []
    operating system = ubuntu20.04

gcc@13.2.0:
    paths:
        cc = /nvmedata/dnagao/spack/opt/spack/linux-ubuntu20.04-zen2/gcc-9.3.0/gcc-13.2.0-bylopp4rj3rh6b3hxn2wueiwasc6vnpg/bin/gcc
        cxx = /nvmedata/dnagao/spack/opt/spack/linux-ubuntu20.04-zen2/gcc-9.3.0/gcc-13.2.0-bylopp4rj3rh6b3hxn2wueiwasc6vnpg/bin/g++
        f77 = /nvmedata/dnagao/spack/opt/spack/linux-ubuntu20.04-zen2/gcc-9.3.0/gcc-13.2.0-bylopp4rj3rh6b3hxn2wueiwasc6vnpg/bin/gfortran
        fc = /nvmedata/dnagao/spack/opt/spack/linux-ubuntu20.04-zen2/gcc-9.3.0/gcc-13.2.0-bylopp4rj3rh6b3hxn2wueiwasc6vnpg/bin/gfortran
    modules = []
    operating system = ubuntu20.04
Installing oneAPI
$ spack list oneapi
intel-oneapi-advisor intel-oneapi-dal intel-oneapi-inspector intel-oneapi-mkl intel-oneapi-vtune
intel-oneapi-ccl intel-oneapi-dnn intel-oneapi-ipp intel-oneapi-mpi oneapi-igc
intel-oneapi-compilers intel-oneapi-dpct intel-oneapi-ippcp intel-oneapi-tbb oneapi-level-zero
intel-oneapi-compilers-classic intel-oneapi-dpl intel-oneapi-itac intel-oneapi-vpl
==> 19 packages
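Optionally (my addition, not in the original steps), Spack's % spec syntax can pin the build to the gcc installed above instead of letting Spack pick a default:
spack install intel-oneapi-compilers %gcc@13.2.0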
spack install intel-oneapi-compilers
spack load intel-oneapi-compilers
which icc
$ spack compiler find
==> Added 3 new compilers to /home/dnagao/.spack/linux/compilers.yaml
oneapi@2023.2.0 intel@2021.10.0 dpcpp@2023.2.0
==> Compilers are defined in the following files:
/home/dnagao/.spack/linux/compilers.yaml
$ spack compiler info oneapi
oneapi@2023.2.0:
    paths:
        cc = /nvmedata/dnagao/spack/opt/spack/linux-ubuntu20.04-zen2/gcc-13.2.0/intel-oneapi-compilers-2023.2.1-whqtu5xf7ah2fggfrkhvwfcnrs3geyst/compiler/2023.2.1/linux/bin/icx
        cxx = /nvmedata/dnagao/spack/opt/spack/linux-ubuntu20.04-zen2/gcc-13.2.0/intel-oneapi-compilers-2023.2.1-whqtu5xf7ah2fggfrkhvwfcnrs3geyst/compiler/2023.2.1/linux/bin/icpx
        f77 = /nvmedata/dnagao/spack/opt/spack/linux-ubuntu20.04-zen2/gcc-13.2.0/intel-oneapi-compilers-2023.2.1-whqtu5xf7ah2fggfrkhvwfcnrs3geyst/compiler/2023.2.1/linux/bin/ifx
        fc = /nvmedata/dnagao/spack/opt/spack/linux-ubuntu20.04-zen2/gcc-13.2.0/intel-oneapi-compilers-2023.2.1-whqtu5xf7ah2fggfrkhvwfcnrs3geyst/compiler/2023.2.1/linux/bin/ifx
    modules = []
    operating system = ubuntu20.04
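As an optional check (my addition), the newly registered compiler should answer with its version banner:
icx --version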
Installing Intel MPI
spack install intel-oneapi-mpi
spack load intel-oneapi-mpi
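As a quick check (my addition), the MPI compiler wrapper and launcher should now be on the PATH, and mpirun should identify itself as the Intel MPI Library:
which mpicc mpirun
mpirun --version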
Intel MPI Hello World
Save the following source as hello_mpi.c:
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int numprocs, rank, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);          /* total number of ranks */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);              /* this process's rank   */
    MPI_Get_processor_name(processor_name, &namelen);  /* host we are on        */

    printf("Hello from process %2d out of %2d on %s\n",
           rank, numprocs, processor_name);

    MPI_Finalize();
    return 0;
}
Compiling
With the Spack packages loaded, mpicc already knows the Intel MPI include and library paths, so no extra environment script is needed:
spack load intel-oneapi-compilers
spack load intel-oneapi-mpi
mpicc hello_mpi.c -o hello_mpi
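Before involving Slurm, a single-node smoke test (my suggestion; the rank count of 4 is arbitrary) should print one greeting per rank:
mpirun -n 4 ./hello_mpi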
Submitting a multi-node job with Slurm
Create run.sh:
#!/bin/bash
#SBATCH --job-name=hello_mpi
#SBATCH --ntasks-per-node=8
#SBATCH --nodes=3
source /nvmedata/dnagao/spack/share/spack/setup-env.sh
# there is no plain "oneapi" package; mpirun needs intel-oneapi-mpi loaded
spack load intel-oneapi-compilers
spack load intel-oneapi-mpi
which spack
which icc
which mpirun
echo "---- machinefile for MPI ----"
srun -n ${SLURM_NTASKS} hostname | sort > hosts.${SLURM_JOB_ID}
cat hosts.${SLURM_JOB_ID}
NODEFILE="hosts.${SLURM_JOB_ID}"
cat <<ETX
---- INPUT ENVIRONMENT VARIABLES ----
JOB_ID="${SLURM_JOB_ID}"
JOB_NAME="${SLURM_JOB_NAME}"
PARTITION_NAME="${SLURM_JOB_PARTITION}"
NODE_LIST="${SLURM_JOB_NODELIST}"
NODEFILE="${NODEFILE}"
NTASKS="${SLURM_NTASKS}"
ETX
echo "---- Hello MPI ----"
mpirun -n ${SLURM_NTASKS} -machinefile ${NODEFILE} ./hello_mpi
Submit the job:
sbatch run.sh
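To watch the queue and read the result (standard Slurm commands, my addition): run.sh sets no --output option, so stdout lands in slurm-<jobid>.out, where <jobid> is the ID that sbatch prints.
squeue -u $USER
cat slurm-<jobid>.out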