Discussion:
Truncating last objects for C++ to MPI
uzzal
2008-05-23 14:36:41 UTC
Hi,

When I use the following code to send an EdgeVector, it always truncates
the last element. For example, if I send 4 edges to the other processors,
they receive 4 edges, but the last of these four edges contains no values.
This happens for any number of edges, and I don't know why.

Can anyone kindly help?

regards,
Uzzal.

CODE
----------
class CEdge
{
public:
    CEdge() { }
    virtual ~CEdge() { }

    long getNumofLong() { return 3; }

public:
    long m_lEdgeID;     // this is not mandatory, can be deleted in the future
    long m_lVertex1ID;
    long m_lVertex2ID;
};
typedef std::vector<CEdge> EdgeVector;
----------------------------------------------------------------
#include "mpi.h"
#include <stdio.h>
#include <vector>
#define COUNTEDGE 3

int main(int argc,char* argv[])
{

MPI_Init(&argc,&argv); // initialize MPI
static MPI_Datatype m_MPIEdgeType;

CEdge edTest;
long lCount = 3;

int block_lengths[lCount];
MPI_Aint displacements[lCount];
MPI_Datatype typelist[lCount];
MPI_Aint start_address;
MPI_Aint address1, address2;

long lIndex;

block_lengths[0] = block_lengths[1] = block_lengths[2] = 1 ;
typelist[0] = MPI_LONG;
typelist[1] = MPI_LONG;
typelist[2] = MPI_LONG;

MPI_Address(&edTest.m_lEdgeID, &start_address);
displacements[0] = 0;

MPI_Address(&edTest.m_lVertex1ID, &address1);
displacements[1] = address1 - start_address;

MPI_Address(&edTest.m_lVertex2ID, &address2);
displacements[2] = address2 - start_address;

MPI_Type_struct(3, block_lengths, displacements, typelist,
&m_MPIEdgeType);
MPI_Type_commit(&m_MPIEdgeType);


int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank); // get the proc ID
int nProcCount;
MPI_Comm_size(MPI_COMM_WORLD, &nProcCount); // get processor count

if(rank == 0)
{
CEdge xEdges[COUNTEDGE];

for(lIndex = 0; lIndex < COUNTEDGE; lIndex++)
{
CEdge ed;
ed.m_lEdgeID = lIndex;
ed.m_lVertex1ID = 2 * lIndex + 1;
ed.m_lVertex2ID = 2 * lIndex + 100;
xEdges[lIndex] = ed;
}

long lLocalEdgeCount = COUNTEDGE;

long lProcIndex = nProcCount - 1;

for(;lProcIndex > 0; lProcIndex--)
{
MPI_Send(&lLocalEdgeCount, 1, MPI_LONG, lProcIndex, 1,
MPI_COMM_WORLD);
MPI_Send(xEdges, lLocalEdgeCount, m_MPIEdgeType, lProcIndex, 2,
MPI_COMM_WORLD);
}
}
else
{
long lLocalEdgeCount;
MPI_Status stat;
int count;

// receive the edge count
MPI_Recv(&lLocalEdgeCount, 1, MPI_LONG, 0, 1, MPI_COMM_WORLD, &stat);
CEdge aRecEd[lLocalEdgeCount];

// receive the edge set
MPI_Recv(aRecEd, lLocalEdgeCount, m_MPIEdgeType, 0, 2, MPI_COMM_WORLD,
&stat);

// get the size of the message
MPI_Get_count(&stat, m_MPIEdgeType, &count);

printf("Proc: %ld : Received Edge Count:---- %ld\n", rank, count);


for(long lIndex = 0; lIndex < lLocalEdgeCount; lIndex++)
{
printf("Proc: %ld: ID : %ld Edge - %ld %ld\n", rank, lIndex +1 ,
aRecEd[lIndex].m_lVertex1ID, aRecEd[lIndex].m_lVertex2ID);
}
}

// free the edge type and vertex type
MPI_Type_free(&m_MPIEdgeType);
MPI_Finalize(); // end MPI
}
Michael Hofmann
2008-05-26 11:44:22 UTC
Post by uzzal
Can anyone kindly help?
The memory layout of your class "CEdge" looks approx. like this:

CEdge = | ... | lEdgeID | lVertex1ID | lVertex2ID | ... |
        |  A  |                B                  |  C  |

Your MPI datatype covers only area B and assumes that the areas A and C
are empty (have size 0).

1. You use a displacement of 0 for 'm_lEdgeID' and 'xEdges' for the starting
address of the send-buffer. This fails if A is not empty. Use the correct
displacement of 'm_lEdgeID' in 'CEdge', or use '&xEdges[0].m_lEdgeID' for the
starting address of the send-buffer. The same problem occurs for the
receive-buffer.

2. Since your MPI datatype covers only area B, MPI_Send assumes that the
data of the second edge starts right behind the first edge and so on
(without any holes). This fails if A and C are not empty. A solution to this
problem is to use an upper-bound marker (e.g.
http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Type_struct.html).

In general, both problems can be solved using "Lower-bound and upper-bound
markers"
(http://www-unix.mcs.anl.gov/mpi/mpi-standard/mpi-report-1.1/node57.htm).
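
For illustration, here is a minimal standalone sketch (a hypothetical check, assuming the CEdge class from the first message) that makes areas A and C visible by comparing sizeof(CEdge) with the span covered by the three longs:

#include <stdio.h>

// Same layout as in the question: the virtual destructor typically adds a
// hidden vtable pointer in front of the data members (area A).
class CEdge
{
public:
    CEdge() { }
    virtual ~CEdge() { }

    long m_lEdgeID;
    long m_lVertex1ID;
    long m_lVertex2ID;
};

int main()
{
    CEdge ed;
    char* base  = reinterpret_cast<char*>(&ed);
    char* first = reinterpret_cast<char*>(&ed.m_lEdgeID);
    char* last  = reinterpret_cast<char*>(&ed.m_lVertex2ID) + sizeof(long);

    printf("sizeof(CEdge)             = %ld bytes\n", (long)sizeof(CEdge));
    printf("area A (before m_lEdgeID) = %ld bytes\n", (long)(first - base));
    printf("area B (the three longs)  = %ld bytes\n", (long)(last - first));
    printf("area C (after Vertex2ID)  = %ld bytes\n",
           (long)sizeof(CEdge) - (long)(last - base));
    return 0;
}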


Michael
uzzal
2008-05-26 14:09:24 UTC
Post by Michael Hofmann
In general, both problems can be solved using "Lower-bound and upper-bound markers" (http://www-unix.mcs.anl.gov/mpi/mpi-standard/mpi-report-1.1/node57.htm).
I was trying to modify my code, but it still sends only partially. Can
you kindly tell me where to modify? I am sorry for this, but I am very
new to MPI. And can you kindly send me a link where I can find an example
of UB or LB?

Regards,
- Mostofa
Michael Hofmann
2008-05-26 14:54:26 UTC
Post by uzzal
I was trying to modify my code, but it still sends only partially. Can
you kindly tell me where to modify? I am sorry for this, but I am very
new to MPI.
Using MPI_UB to do the "padding" is sufficient in your case. Referring to
the code in your first message, you need the following changes:

long lCount = 4;

block_lengths[3] = 1;
typelist[3] = MPI_UB;
displacements[3] = sizeof(CEdge);

MPI_Type_struct(4, ...);

MPI_Send(&xEdges[0].m_lEdgeID, ...);

MPI_Recv(&aRecEd[0].m_lEdgeID, ...);
Post by uzzal
And can you kindly send me a link where I can find an example
of UB or LB?
http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Type_struct.html

http://www-unix.mcs.anl.gov/mpi/mpi-standard/mpi-report-1.1/node61.htm
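
Putting those fragments together, here is a minimal self-contained sketch of the corrected type construction (an illustrative sketch assuming the CEdge class from the first message; the helper name makeEdgeType is hypothetical, and MPI_Type_create_resized would be the MPI-2 alternative to the deprecated MPI_UB marker):

#include "mpi.h"

class CEdge
{
public:
    CEdge() { }
    virtual ~CEdge() { }

    long m_lEdgeID;
    long m_lVertex1ID;
    long m_lVertex2ID;
};

// Build an MPI datatype whose extent is sizeof(CEdge), so arrays of CEdge
// can be sent with count > 1 even though the virtual destructor adds a
// hidden vtable pointer in front of the three longs.
// Must be called after MPI_Init.
static MPI_Datatype makeEdgeType()
{
    CEdge edTest;

    int          block_lengths[4] = { 1, 1, 1, 1 };
    MPI_Datatype typelist[4]      = { MPI_LONG, MPI_LONG, MPI_LONG, MPI_UB };
    MPI_Aint     displacements[4];
    MPI_Aint     start_address, address;

    // Displacements are relative to the first long member, not to the object.
    MPI_Address(&edTest.m_lEdgeID, &start_address);
    displacements[0] = 0;

    MPI_Address(&edTest.m_lVertex1ID, &address);
    displacements[1] = address - start_address;

    MPI_Address(&edTest.m_lVertex2ID, &address);
    displacements[2] = address - start_address;

    // Upper-bound marker: the extent of one element is the full class size.
    displacements[3] = sizeof(CEdge);

    MPI_Datatype edgeType;
    MPI_Type_struct(4, block_lengths, displacements, typelist, &edgeType);
    MPI_Type_commit(&edgeType);
    return edgeType;
}

// Usage: pass the address of the first member as the buffer, e.g.
//   MPI_Send(&xEdges[0].m_lEdgeID, n, edgeType, dest, 2, MPI_COMM_WORLD);
//   MPI_Recv(&aRecEd[0].m_lEdgeID, n, edgeType, 0, 2, MPI_COMM_WORLD, &stat);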


Michael
