Jason HK
2012-07-19 12:55:09 UTC
Dear All,
I am working on a project and need to use MPI_REDUCE to collect (and sum up) the arrays calculated on all the computing nodes. Since the array consumes a lot of memory, I don't want to allocate a separate buffer to store the results. Is it possible for the array already allocated on the master node (iRank==0) to act as the receive buffer itself?
I tried to achieve this goal as follows:
call MPI_COMM_GROUP(MPI_COMM_WORLD, MPI_GROUP_World, iErr)
ranks(1) = 0
call MPI_GROUP_EXCL(MPI_GROUP_World, 1, ranks, MPI_GROUP_Slaves, iErr)
call MPI_COMM_CREATE(MPI_COMM_WORLD, MPI_GROUP_Slaves, MPI_COMM_Slaves, iErr)
call MPI_REDUCE(Di, Di, blkSize*blkSize*Ny*2, MPI_DOUBLE_PRECISION, MPI_SUM, 0, MPI_COMM_Slaves, iErr)
However, it keeps reporting a segmentation fault when calling MPI_REDUCE.
Can anybody tell me why? Are there any tricks in the implementation details of MPI_REDUCE?
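For reference, I believe the MPI standard forbids passing the same array as both the send and receive buffer of MPI_REDUCE, and provides MPI_IN_PLACE for exactly this situation. Below is a minimal, untested sketch of what I think the in-place pattern looks like, reusing the names Di, blkSize, Ny, iRank and iErr from my code above (the sizes are placeholders):

program reduce_in_place
    use mpi            ! provides MPI_IN_PLACE, MPI_DOUBLE_PRECISION, etc.
    implicit none
    integer :: iRank, iErr
    integer, parameter :: blkSize = 4, Ny = 8   ! placeholder sizes
    double precision :: Di(blkSize*blkSize*Ny*2)

    call MPI_INIT(iErr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, iRank, iErr)
    Di = dble(iRank + 1)   ! each rank's partial result lives in Di

    if (iRank == 0) then
        ! Root passes MPI_IN_PLACE as the send buffer: its own
        ! contribution is read from Di and the sum lands back in Di,
        ! so no second array is ever allocated.
        call MPI_REDUCE(MPI_IN_PLACE, Di, size(Di), MPI_DOUBLE_PRECISION, &
                        MPI_SUM, 0, MPI_COMM_WORLD, iErr)
    else
        ! On non-root ranks the receive buffer is ignored, so Di can
        ! appear in both positions here.
        call MPI_REDUCE(Di, Di, size(Di), MPI_DOUBLE_PRECISION, &
                        MPI_SUM, 0, MPI_COMM_WORLD, iErr)
    end if

    call MPI_FINALIZE(iErr)
end program reduce_in_place

If that is correct, the slave communicator above would not be needed at all, since rank 0 can contribute its own Di directly over MPI_COMM_WORLD.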
Thanks in advance!
Jason