Discussion:
scattering datatypes that overlap each other
ortie
2008-08-21 12:11:26 UTC
Hi,
I thought of myself as an intermediate MPI user, but I've just been demoted ...
All I am trying to do is distribute non-contiguous blocks from root to
the other processes. I defined a Type_vector, which I use with
Scatterv, but I now see that successive elements of that datatype are
assumed to be contiguous. Not explaining myself properly? Here's my
code: it's dead simple really, a 4x4 matrix (called B), sending a
different 2x2 block to each of 4 processes (including root), to be
stored in Bportion, their chunk of B:
[CODE]
displacements = calloc(4, sizeof(int));
counts = calloc(4, sizeof(int));
for (i = 0; i < 2; i++)
    for (j = 0; j < 2; j++) {
        displacements[i*2 + j] = i*2 + j;
        counts[i*2 + j] = 1;
    }
MPI_Type_vector(2, 2, 4, MPI_FLOAT, &blockType);
MPI_Type_commit(&blockType);
MPI_Scatterv(B, counts, displacements, blockType, Bportion, 2*2,
             MPI_FLOAT, 0, MPI_COMM_WORLD);
[/CODE]
The rest of the code is running fine, but the datatype I have defined,
blockType, will only allow the next element to begin at the very end of
its extent, and not somewhere in the middle. So I get an incorrect
result each time.

I've been over the MPI reference book in the relevant section, but it
doesn't quite hit this spot.

All through the hours I've spent on this, I think I've been missing
something fundamental about datatypes.

Any help, suggestions appreciated.
cheers.
Georg Bisseling
2008-08-22 14:38:35 UTC
The displacements in MPI_Scatterv are given in units of the extent of the
datatype used, blockType in this example.

So what is the extent of blockType? My interpretation is that it is
6 times the size of a float, not 4, not 8: the type covers the floats at
offsets 0-1 and 4-5, so its extent runs from 0 to 6 floats. This can be
checked with MPI_Type_get_extent (or the older, deprecated
MPI_Type_extent).

I see no clear way to use blockType in MPI_Scatterv to accomplish
what you want, even with LB/UB trickery. Blame it on me.

Maybe you should have a look at MPI_Type_create_subarray, which seems to
be targeted at your problem. But this too will end up as a loop of
sends, one per receiver, instead of a single collective.

hth
Georg
--
This signature intentionally left almost blank.
http://www.this-page-intentionally-left-blank.org/
ortie
2008-08-22 15:03:33 UTC
OK, thank you very much Georg, for your reply.
I came back here after spending more time on this, because I SIFTTB:
"solved it for the time being".
My trick was to use simple sends and receives instead of Scatterv. That
way, the displacements can be given in terms of floats rather than in
units of the created datatype.

So now my program gives the correct answer. Blame you? Hehe, no.
Actually, I think MPI version 1 and the reference guide are generally
good. But sometimes, as in this case, the reference guide limits
itself to stating the rules and giving examples that are good but
that play to certain strengths. OK, everybody is guilty of that. In
fact, if you're *only* guilty of that, you're doing OK.

I'm surprised I ran into these problems, actually ...like I said, I
thought I was an intermediate MPI user.

Anyway, enough chatter, I have it working. On to the next thing. Much
obliged for the answer!
Michael Hofmann
2008-08-22 17:24:58 UTC
As already mentioned, the problem is/was the extent of your derived
datatype.

However, there is an official example (the 4th example in the MPI
reference guide) of how to change the extent of a datatype:
http://www.mpi-forum.org/docs/mpi-11-html/node61.html#Node61

This can be achieved with a second struct datatype that acts like an
envelope: the MPI_UB marker specifies the "upper bound" of the new
datatype.

int envBlocklens[2];
MPI_Aint envDispls[2];   /* MPI_Type_struct takes MPI_Aint displacements */
MPI_Datatype envTypes[2], envType;

envBlocklens[0] = 1;
envBlocklens[1] = 1;
envDispls[0] = 0;
envDispls[1] = sizeof(float);
envTypes[0] = blockType;
envTypes[1] = MPI_UB;

MPI_Type_struct(2, envBlocklens, envDispls, envTypes, &envType);
MPI_Type_commit(&envType);

After that, you can specify the displacements of the 2x2 blocks of your
4x4 matrix in MPI_FLOAT steps.

for (i = 0; i < 2; i++)
    for (j = 0; j < 2; j++) {
        displacements[i*2 + j] = i*8 + j*2;   /* <-- changed */
    }

Finally, use "envType" instead of "blockType", the rest is the same.


Michael
ortie
2008-08-22 21:47:56 UTC
Many thanks Michael, I'll have a look at that. Will be very useful.