Computation using MPI Scatter in Julia

Continuing from the previous article, this is another example of MPI-based computation in Julia. This time we distribute data with Scatter: rank 0 sends data to the processes on the other ranks.

Scatter!

Rank 0 splits the array Aall into equal parts and sends one part to each of the other ranks. With Scatter!, every chunk being sent must have the same size.

using MPI

function test()
    MPI.Init() # initialize MPI
    comm = MPI.COMM_WORLD
    nprocs = MPI.Comm_size(comm) # number of processes
    myrank = MPI.Comm_rank(comm) # rank of this process

    nmax = 10
    ista, iend = parallel_range(1, nmax, nprocs, myrank) # index range owned by this rank

    A = Array{Int}(undef, iend-ista+1) # receive buffer for this rank's chunk

    if myrank == 0
        # build the full array 1..nmax on rank 0 only
        Aall = ones(Int, nmax)
        for i = 1:nmax
            Aall[i] = i
        end
    end

    if myrank == 0
        # rank 0 splits Aall into equal chunks and sends one chunk to each rank
        MPI.Scatter!(UBuffer(Aall, iend-ista+1), A, 0, comm)
    else
        # the other ranks only receive
        MPI.Scatter!(nothing, A, 0, comm)
    end

    println("A, myrank", A, ", ", myrank)

    MPI.Finalize()
end

function parallel_range(n1, n2, nprocs, irank)
    # split the range n1:n2 as evenly as possible over nprocs processes,
    # giving one extra element to the lowest-numbered ranks when it does not divide evenly
    nb = div(n2-n1+1, nprocs)
    nm = (n2-n1+1)%nprocs
    ista = irank*nb+n1 + min(irank, nm)
    iend = ista+nb-1
    if nm > irank
        iend = iend+1
    end
    return ista, iend
end

@time test()

The 10-element array is split in two before being sent, so rank 0 receives 1 through 5 and rank 1 receives 6 through 10.

$ mpirun -n 2 julia MPI-scatter.jl 
A, myrankA, myrank[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], 0
1
  0.244625 seconds (273.49 k allocations: 15.826 MiB, 76.19% compilation time)
  0.239508 seconds (273.49 k allocations: 15.826 MiB, 80.11% compilation time)
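
By the way, the parallel_range helper splits an index range as evenly as possible, handing any leftover elements to the lowest-numbered ranks. Here is a minimal single-process sketch (no MPI involved; the range 1:11 over 2 ranks is just an illustrative choice) of what it returns. The same split shows up in the Scatterv! example below.

# standalone check of parallel_range; the function body is copied from the listing above
function parallel_range(n1, n2, nprocs, irank)
    nb = div(n2-n1+1, nprocs)
    nm = (n2-n1+1)%nprocs
    ista = irank*nb+n1 + min(irank, nm)
    iend = ista+nb-1
    if nm > irank
        iend = iend+1
    end
    return ista, iend
end

for irank = 0:1
    println("rank ", irank, " => ", parallel_range(1, 11, 2, irank))
end
# rank 0 => (1, 6)
# rank 1 => (7, 11)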

Scatterv!

Scatterv! can send chunks with different numbers of elements. Before calling Scatterv!, we compute how much data goes to each rank; in the code below, sizes holds the number of elements sent to each rank.

using MPI

function test()
    MPI.Init() # initialize MPI
    comm = MPI.COMM_WORLD
    nprocs = MPI.Comm_size(comm)
    myrank = MPI.Comm_rank(comm)

    jmax = 11
    imax = 100
    sizes = zeros(Int64, nprocs)
    if myrank == 0
        # number of elements sent to each rank: imax elements per column it owns
        for j = 0:nprocs-1
            jsta, jend = parallel_range(1, jmax, nprocs, j)
            sizes[j+1] = imax*(jend-jsta+1)
        end
        println("sizes=", sizes, ", ", sum(sizes))
    end
    ista, iend = parallel_range(1, jmax, nprocs, myrank) # columns owned by this rank

    A = Array{Int}(undef, imax, iend-ista+1) # receive buffer for this rank's columns

    if myrank == 0
        # build the full matrix on rank 0 only: every element of column j is j
        Aall = ones(Int, imax, jmax)
        for j = 1:jmax
            for i = 1:imax
                Aall[i, j] = j
            end
        end
    end

    if myrank == 0
        # rank 0 sends sizes[k] consecutive elements to rank k-1
        MPI.Scatterv!(VBuffer(Aall, sizes), A, 0, comm)
    else
        MPI.Scatterv!(nothing, A, 0, comm)
    end

    println("A, myrank", A[1,:], ", ", myrank)

    MPI.Finalize()
end

function parallel_range(n1, n2, nprocs, irank)
    # same helper as above: split the range n1:n2 as evenly as possible over nprocs processes
    nb = div(n2-n1+1, nprocs)
    nm = (n2-n1+1)%nprocs
    ista = irank*nb+n1 + min(irank, nm)
    iend = ista+nb-1
    if nm > irank
        iend = iend+1
    end
    return ista, iend
end

@time test()

Execution result. The 11 columns of 100 elements each are split into 6 columns for rank 0 and 5 columns for rank 1, so rank 0 holds 600 elements and rank 1 holds 500.

$ mpirun -n 2 julia MPI-scatterv.jl 
sizes=[600, 500], 1100
A, myrank[1, 2, 3, 4, 5, 6], 0
A, myrank[7, 8, 9, 10, 11], 1
  0.340811 seconds (273.49 k allocations: 15.830 MiB, 53.23% compilation time)
  0.358422 seconds (273.54 k allocations: 15.841 MiB, 50.95% compilation time)
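
Since Julia stores matrices in column-major order, the counts [600, 500] passed to VBuffer(Aall, sizes) correspond to whole columns of Aall: the first 600 elements are columns 1 through 6 and the remaining 500 are columns 7 through 11. A minimal single-process sketch of that layout (no MPI; the comprehension just reproduces the fill loop run on rank 0 above):

# column-major layout: the first sizes[1] = 600 elements of Aall are exactly columns 1:6
Aall = [j for i = 1:100, j = 1:11] # same contents as the fill loop on rank 0
println(vec(Aall)[1:600]    == vec(Aall[:, 1:6]))  # true
println(vec(Aall)[601:1100] == vec(Aall[:, 7:11])) # true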