Klaas Vantournhout
2007-11-01 16:14:41 UTC
Hi all,
I am currently reading several MPI tutorials and testing a bit with
some simple examples. But after all this reading and looking at
examples, something is puzzling me which I cannot figure out.
A simple program looks like this
----
#include <mpi.h>
int main(int argc, char *argv[]) {
// PART 1
// some declarations here
MPI_Init(&argc, &argv);
// PART 2
// your parallel stuff here
MPI_Finalize();
// PART 3
// some stuff
return 0;
}
---
What I am wondering about is the following: what is the use of PART 1
and PART 3?
When you write the following simple program, then compile and execute it:
--- test1.cpp ---
#include <iostream>
#include <mpi.h>
int main(void) {
std::cout << "foo" << std::endl;
return 0;
}
---
$ mpic++ -o test1.out test1.cpp
$ mpirun -np 4 test1.out
foo
foo
foo
foo
Why is this executed on all 4 processors? I would expect it to run on
only 1 processor, since we did not initialize MPI yet.
So if mpirun starts the program on all processors anyway, executing
everything in PART 1, what is the use of PART 1?
As I see it, couldn't we just move everything from PART 1 into PART 2?
And if so, what is then actually the use of MPI_Init(&argc, &argv)?
The first thing I have seen in the example programs is an immediate call
to MPI_Init; why do you need it then? Can't mpirun or mpiexec just do
this initialisation themselves?
The same reasoning applies to PART 3 and MPI_Finalize().
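To make my question concrete, here is a minimal sketch of the kind of
program I mean (the file name test2.cpp and the rank/size printing are
just my own illustration; I am only assuming that calls like
MPI_Comm_rank and MPI_Comm_size are the sort of thing that actually
needs the initialisation):
--- test2.cpp ---
#include <iostream>
#include <mpi.h>

int main(int argc, char *argv[]) {
  // PART 1: ordinary serial code, runs in every copy started by mpirun
  std::cout << "before MPI_Init" << std::endl;

  MPI_Init(&argc, &argv);

  // PART 2: after MPI_Init we can ask who we are and how many we are
  int rank = 0, size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  std::cout << "I am process " << rank << " of " << size << std::endl;

  MPI_Finalize();

  // PART 3: ordinary serial code again, still runs in every copy
  return 0;
}
---
With mpirun -np 4 I would expect four "before MPI_Init" lines and four
lines like "I am process 0 of 4", and that is exactly what confuses me
about the role of PART 1 and PART 3.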
Or am I just missing something crucial here, which I guess is the case.
Thanks for the help.
Klaas