Discussion:
Thread support (using OpenMP) in OpenMPI
l***@yahoo.com
2008-09-05 00:55:07 UTC
Hi,

I have the following code to test if I am using threads (using OpenMP)
correctly in OpenMPI:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int rank, i, tag1 = 1, tag2 = 2, state, state2 = 5, check;
    int check2 = 5, size, id;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    state = rank;
    check = rank * 2;

    if (rank == 0)
    {
        /* rank 0: one section receives the state from every other rank,
           one sends its state to every other rank, one receives the check
           value from rank 1, and one prints the thread count */
#pragma omp parallel sections private(id, i)
        {
            id = omp_get_thread_num();
            printf("%d TID: %d\n", rank, id);
#pragma omp section
            {
                for (i = 1; i < size; i++)
                {
                    MPI_Recv(&state2, 1, MPI_INT, MPI_ANY_SOURCE, tag1,
                             MPI_COMM_WORLD, &status);
                    printf("%d received state: %d from %d\n", rank, state2,
                           status.MPI_SOURCE);
                }
            }
#pragma omp section
            {
                for (i = 1; i < size; i++)
                    MPI_Send(&state, 1, MPI_INT, i, tag1, MPI_COMM_WORLD);
            }
#pragma omp section
            {
                MPI_Recv(&check2, 1, MPI_INT, 1, tag2, MPI_COMM_WORLD,
                         &status);
            }
#pragma omp section
            printf("%d get_num_threads: %d\n", rank,
                   omp_get_num_threads());
        }
    }
    else
    {
        /* other ranks: exchange state with rank 0 and send the check value */
#pragma omp parallel sections private(id, i)
        {
            id = omp_get_thread_num();
            printf("%d TID: %d\n", rank, id);
#pragma omp section
            {
                MPI_Send(&state, 1, MPI_INT, 0, tag1, MPI_COMM_WORLD);
            }
#pragma omp section
            {
                MPI_Recv(&state2, 1, MPI_INT, 0, tag1, MPI_COMM_WORLD,
                         &status);
            }
#pragma omp section
            {
                MPI_Send(&check, 1, MPI_INT, 0, tag2, MPI_COMM_WORLD);
            }
#pragma omp section
            printf("%d get_num_threads: %d\n", rank,
                   omp_get_num_threads());
        }
    }
    MPI_Finalize();
    return 0;
}

Then I compiled the program using "mpicc mpithreadtest.c -fopenmp" and
ran it using 2 nodes.

I got the correct output, i.e. the values of state2 and check2 were sent
and received correctly. However, at this point I'm still not sure whether
the program actually ran with multiple threads or simply ignored the
pragmas.

So I replaced all the MPI_Send/MPI_Recv calls with empty for loops and ran
the program again on 2 nodes. According to System Monitor, all 4 CPUs had a
usage of at least 40% to 50%. Then I removed all the pragmas as well,
keeping only the empty for loops, so that each process no longer
creates/runs any additional threads. I ran the program on 2 nodes again,
and this time System Monitor showed only 2 CPUs with a usage of at least
80%. Which CPUs are busy changes a lot, i.e. from CPU1 to CPU4 to CPU3
etc., and occasionally another CPU shows a usage of about 10-15%, but there
is always 1 CPU with 0% usage.

Does this mean that my MPI program is using threads correctly? My
installation of OpenMPI does not seem to support threads - running
"ompi_info | grep Thread" gives me "Thread support: posix (mpi: no,
progress: no)". Or does the MPI thread support refer only to POSIX threads
inside MPI itself, while running OpenMP alongside MPI simply lets the OS
create new threads in each process, without any actual involvement of MPI?

Another question: do I have to uninstall and reinstall OpenMPI to enable
thread support? My ./configure is located in
/Desktop/Non-Shared/Documents/Installations/openmpi-1.2.6, while my OpenMPI
installation (i.e. all the library files, mpiexec etc.) is in
~/usr/lib64/openmpi/1.2.5-gcc. I've tried simply running ./configure in
/Desktop/Non-Shared/Documents/Installations/openmpi-1.2.6 with the
"--enable-mpi-threads" option and it didn't work. If I do have to
reinstall, how should I do that?

Thank you.

Regards,
Rayne
Georg Bisseling
2008-09-05 14:57:29 UTC
I guess you should compile either OpenMPI or MPICH2 with
thread support enabled. You will find directions on how to
do that on their respective web sites.
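For OpenMPI the usual GNU-style sequence is roughly the following (just a
sketch - the prefix is only an example, and the exact options for your
version are described in the README that ships with the source):

  cd openmpi-1.2.6
  ./configure --prefix=$HOME/openmpi-threaded --enable-mpi-threads
  make
  make install

Note that running ./configure alone only generates the makefiles; nothing
changes until you run make and make install, and afterwards you have to
make sure that the mpicc and mpiexec you call come from that new
installation (e.g. by putting $HOME/openmpi-threaded/bin first in your
PATH).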

To avoid mixing headers, scripts and libraries between your self-built MPI
and the one that came with your Linux distribution, you should uninstall
the MPI packages from your distribution.

After you have configured, compiled and installed your MPI (expect to do
that several times), you should write a small test program that uses
MPI_Init_thread correctly to initialize with the thread support level
MPI_THREAD_MULTIPLE.
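Something along these lines (a minimal sketch; the important part is to
check the provided level instead of assuming it):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int provided;
    /* request full thread support and check what the library grants */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        printf("MPI_THREAD_MULTIPLE not available, provided = %d\n",
               provided);
    else
        printf("MPI_THREAD_MULTIPLE is available\n");
    MPI_Finalize();
    return 0;
}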

If that works, then you are ready to compile a so-called hybrid program
that uses MPI and OpenMP.

I do not understand your question about the ignored #pragmas.
Did your program print a thread count higher than 1 or not?
Hint: some versions of top can show the threads when you press "H"; they
all have built-in help. And of course you can print omp_get_thread_num()
in each section to see OpenMP's notion of a thread id.

To me it seems that you are fighting on too many fronts at once:
administering Linux, programming with OpenMP and programming with MPI.
Pardon me if I am wrong, but I suggest you enjoy all the possibilities to
shoot yourself in the foot with OpenMP and then with MPI separately before
going for the full orgy.


ciao
Georg
