Discussion:
POSIX threads and MPI2
Karolski
2008-11-14 14:07:39 UTC
Hello All

I've got a small problem here with POSIX threads and MPI. Let me explain
the situation.

Let's say I've got 4 nodes. My application starts 4 POSIX threads.
Each thread is waiting for messages to come from one other node.
It's like this:
thread0 is waiting for messages from node 0
thread1 is waiting for messages from node 1
thread2 is waiting for messages from node 2
thread3 is waiting for messages from node 3
By "waiting" I mean code like below:

MPI::COMM_WORLD.Barrier(); // sync all threads here
while (1)
{
    MPI::COMM_WORLD.Recv(&msg, 1, MPI::INT, listen_rank, tag);
    // do something
}

Of course listen_rank in each thread is correctly set.

But every time I start up my application, some threads throw
MPI::Exceptions, assertion failures or other errors. So it looks
like this is not a good way to receive messages from other nodes. Can
anyone explain to me what's wrong with this approach to parallel
communication?

The thing is that I cannot say where and when any of the nodes will send
a message, or who the receiver will be. So I thought that if I create n
threads in each node, each thread receiving messages from one node,
every node would have a set of threads able to receive messages from
anybody else in the group. What do you think about this?

Many thanks in advance for any suggestions,
Regards
Georg Bisseling
2008-11-14 15:38:41 UTC
I am not even sure if we understand the terms node, MPI process and thread
in the same way.

This line
Post by Karolski
MPI::COMM_WORLD.Barrier(); // sync all threads here
makes no sense in the MPI context. A barrier synchronizes
all MPI processes that are part of the communicator used.

Do you really have 4 threads in one MPI process that will
all call this barrier? This will not work as you expect.

Please consult the documentation for MPI_Init_thread about the
usage of threads in an MPI program.
--
This signature intentionally left almost blank.
http://www.this-page-intentionally-left-blank.org/
Karolski
2008-11-14 19:47:15 UTC
Post by Georg Bisseling
I am not even sure if we understand the terms node, MPI process and thread
in the same way.
Node - I mean computer, machine.
Thread as lightweight process, POSIX thread.
Post by Georg Bisseling
This line
MPI::COMM_WORLD.Barrier(); // sync all threads here
makes no sense in the MPI context. A barrier synchronizes
all MPI processes that are part of the communicator used.
Do you really have 4 threads in one MPI process that will
all call this barrier? This will not work as you expect.
To be honest, the barrier is not as important as you think. It's
not the point. The point is that I'm trying to receive messages in 4
different threads.
Post by Georg Bisseling
Please consult the documentation for MPI_Init_thread about the
usage of threads in an MPI program.
Well, in fact I still use simple POSIX threads, not MPI threads. Does
it matter?

Thanks.
Regards
Georg Bisseling
2008-11-15 19:28:18 UTC
Post by Karolski
Node - I mean computer, machine.
Thread as lightweight process, POSIX thread.
OK, here we agree.
Post by Karolski
Well, in fact I still use simple POSIX threads, not MPI threads. Does
it matter?
There is no such thing as an "MPI thread". The whole programming model
of MPI is built on processes. And then of course on communicators that
contain processes.

To write a meaningful MPI program that uses threads you will have to
decide if and how the several threads will call MPI functions and map
that to the different levels of thread support that are defined in the
MPI standard. At runtime you will have to call MPI_Init_thread to query
if the MPI implementation (with the current settings) provides the thread
support level that you need.
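
A minimal sketch of what that query could look like (untested, using the
C API; assuming your receiving threads really need to call MPI
concurrently, i.e. MPI_THREAD_MULTIPLE):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    int provided = MPI_THREAD_SINGLE;

    /* Ask for the highest level; the library reports what it actually provides. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        /* Concurrent MPI calls from several threads would not be safe here. */
        fprintf(stderr, "need MPI_THREAD_MULTIPLE, got level %d\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... create the POSIX threads and do the real work ... */

    MPI_Finalize();
    return 0;
}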

For details see
http://www.mpi-forum.org/docs/mpi-2.1/mpi-report-2.1-2008-06-23-black.pdf
section "12.4 MPI and Threads"

Another thing: I suggest using only the C bindings and not the C++
bindings of MPI, because the C++ bindings tend to create expectations
that MPI cannot fulfill. Remember that MPI was designed to fit Fortran!
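
For example, the receive from your first post would look roughly like
this with the C bindings (untested; listen_rank and tag are your own
variables):

int msg;
MPI_Status status;

/* Blocking receive of one int from listen_rank with the given tag. */
MPI_Recv(&msg, 1, MPI_INT, listen_rank, tag, MPI_COMM_WORLD, &status);
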
--
This signature was left intentionally almost blank.
http://www.this-page-intentionally-left-blank.org/
Karolski
2008-11-15 20:44:26 UTC
Post by Georg Bisseling
There is no such thing as an "MPI thread". The whole programming model
of MPI is built on processes. And then of course on communicators that
contain processes.
Agreed, my fault.
Post by Georg Bisseling
To write a meaningful MPI program that uses threads you will have to
decide if and how the several threads will call MPI functions and map
that to the different levels of thread support that are defined in the
MPI standard. At runtime you will have to call MPI_Init_thread to query
if the MPI implementation (with the current settings) provides the thread
support level that you need.
Yep. I've resolved my problem.
First of all, I used
int provided = MPI::Init_thread(argc, argv, MPI::THREAD_MULTIPLE);
rather than MPI_Init(). Next, in one thread I'm sending the messages
using just the Send() function. The second thread is receiving messages:

do {
    flag = MPI::COMM_WORLD.Iprobe(MPI::ANY_SOURCE, MPI::ANY_TAG);
} while (!flag);

MPI::COMM_WORLD.Recv(&msg, 1, MPI::INT, MPI::ANY_SOURCE, MPI::ANY_TAG);

Looks fine so far and it's working!
Post by Georg Bisseling
For details see
http://www.mpi-forum.org/docs/mpi-2.1/mpi-report-2.1-2008-06-23-black.pdf
section "12.4 MPI and Threads"
Thanks for this.
Post by Georg Bisseling
Another thing: I suggest using only the C bindings and not the C++
bindings of MPI, because the C++ bindings tend to create expectations
that MPI cannot fulfill. Remember that MPI was designed to fit Fortran!
Yep, I remember. But that's not good... My whole application is
written in C++, so it was natural for me to use the C++ bindings rather
than C. Of course there's nothing wrong with combining C++ and C,
but... OK, I will think about it for a while later. :)

Regards
Georg Bisseling
2008-11-17 11:43:53 UTC
Post by Karolski
do {
    flag = MPI::COMM_WORLD.Iprobe(MPI::ANY_SOURCE, MPI::ANY_TAG);
} while (!flag);
MPI::COMM_WORLD.Recv(&msg, 1, MPI::INT, MPI::ANY_SOURCE, MPI::ANY_TAG);
You could do without the loop altogether.
Apart from elegance there are two other reasons
why you may want to do that.

Under certain circumstances this will allow
MPI to wait for the message without burning
CPU cycles.

And: sometimes polling with iprobe simply
does not work. Don't tell anybody that I said
that...
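
Dropping the loop would look roughly like this, sticking with the C++
bindings you already use (untested sketch; msg is your int buffer):

MPI::Status status;

// Blocks inside the MPI library until a message arrives -- no busy waiting.
MPI::COMM_WORLD.Recv(&msg, 1, MPI::INT,
                     MPI::ANY_SOURCE, MPI::ANY_TAG, status);

int sender = status.Get_source(); // who actually sent the message
int msgtag = status.Get_tag();    // and with which tag
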
--
This signature intentionally left almost blank.
http://www.this-page-intentionally-left-blank.org/