Discussion:
mpi_reduce error
Jason HK
2012-07-19 12:55:09 UTC
Dear All,

I am working on a project and need to use MPI_REDUCE to collect (and sum up) the arrays calculated on all the computing nodes. Since the array consumes a lot of memory, I don't want to allocate a separate buffer to store the result. Is it possible for the array allocated on the master node (iRank==0) to act as the receive buffer?

I tried to achieve this as follows:
call MPI_COMM_GROUP(MPI_COMM_WORLD, MPI_GROUP_World)
ranks(1)=0
call MPI_GROUP_EXCL(MPI_GROUP_World, 1, ranks, MPI_GROUP_Slaves)
call MPI_COMM_CREATE(MPI_COMM_WORLD, MPI_GROUP_Slaves, MPI_COMM_Slaves)
call MPI_REDUCE(Di, Di, blkSize*blkSize*Ny*2, MPI_DOUBLE_PRECISION, MPI_SUM, 0, MPI_COMM_Slaves, iErr)

However, it keeps reporting a segmentation fault in the call to MPI_REDUCE.

Can anybody tell me why? Are there any tricks in the implementation details of MPI_REDUCE?

Thanks in advance!
Jason
Georg Bisseling
2012-07-20 17:27:39 UTC
You create a new communicator that is MPI_COMM_WORLD \ {rank 0}.

But it seems that you let rank 0 take part in the call to the reduce
over this new communicator (there is no "if" guarding it). Since rank 0
is not a member of the new group, MPI_COMM_CREATE returns MPI_COMM_NULL
on it, and calling MPI_REDUCE on that communicator is invalid.

That is most likely what causes the segmentation fault.
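
Untested, but a guard along the following lines should avoid it. iSlaveRank is a new variable holding the rank inside the slave communicator; I also added the ierror argument that the Fortran bindings require and used MPI_IN_PLACE on the root, since the standard does not allow passing Di as both the send and the receive buffer:

call MPI_COMM_GROUP(MPI_COMM_WORLD, MPI_GROUP_World, iErr)
ranks(1) = 0
call MPI_GROUP_EXCL(MPI_GROUP_World, 1, ranks, MPI_GROUP_Slaves, iErr)
call MPI_COMM_CREATE(MPI_COMM_WORLD, MPI_GROUP_Slaves, MPI_COMM_Slaves, iErr)

! Excluded ranks (here: world rank 0) get MPI_COMM_NULL back and
! must not take part in operations on the new communicator.
if (MPI_COMM_Slaves /= MPI_COMM_NULL) then
   ! iSlaveRank is an additional INTEGER: the rank within MPI_COMM_Slaves
   call MPI_COMM_RANK(MPI_COMM_Slaves, iSlaveRank, iErr)
   if (iSlaveRank == 0) then
      ! Root of the reduce: MPI_IN_PLACE lets Di receive the sum
      ! without a second buffer.
      call MPI_REDUCE(MPI_IN_PLACE, Di, blkSize*blkSize*Ny*2, &
                      MPI_DOUBLE_PRECISION, MPI_SUM, 0, MPI_COMM_Slaves, iErr)
   else
      ! Non-root ranks: the receive buffer argument is ignored here.
      call MPI_REDUCE(Di, Di, blkSize*blkSize*Ny*2, &
                      MPI_DOUBLE_PRECISION, MPI_SUM, 0, MPI_COMM_Slaves, iErr)
   end if
end if

Note that with rank 0 excluded, the summed Di ends up on the lowest remaining world rank (rank 1), not on your master. If you want the result on the master, it may be simpler to keep everybody in MPI_COMM_WORLD and use MPI_IN_PLACE on rank 0.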
--
This signature was intentionally left almost blank.
http://www.this-page-intentionally-left-blank.org/