Discussion:
Looking for a Perl module for InfiniBand
Carole
2008-03-21 18:37:03 UTC
Permalink
Hi,
I am looking for a Perl module that works with MVAPICH on a cluster
using InfiniBand. Before we switched to InfiniBand, the old cluster
was able to use Parallel::MPI, but that module does not work with
MVAPICH and is no longer supported by its authors.

Any places to look, advice, or ideas would be appreciated. The only
other option I am considering is rewriting the code in C++, but that
is not really an avenue I wish to take.
Thanks, Carole
Georg Bisseling
2008-03-21 19:06:23 UTC
Permalink
Would it be an option for you to install a TCP/IP-capable
driver for your InfiniBand cards?

That way you could use the cards in the same way as
Ethernet cards, with just the same software.

Another option could be to use another MPI that can
handle InfiniBand (HP MPI, Intel MPI, Open MPI, ...)
instead of MVAPICH.
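
If you still want to drive MPI from Perl rather than rewriting in C++,
the Parallel::MPI::Simple module on CPAN might be worth a try; it builds
against whatever MPI compiler wrapper it finds, so it should work with
Open MPI or MVAPICH alike. A minimal sketch (call names are from memory,
so please check the module's POD, I have not run this against your setup):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Parallel::MPI::Simple;   # CPAN module, assumed to build against your MPI

  MPI_Init();
  my $rank = MPI_Comm_rank(MPI_COMM_WORLD);
  my $size = MPI_Comm_size(MPI_COMM_WORLD);

  if ($rank == 0) {
      # rank 0 collects one greeting from every other rank
      for my $src (1 .. $size - 1) {
          my $msg = MPI_Recv($src, 0, MPI_COMM_WORLD);   # (source, tag, comm)
          print "rank 0 received: $msg\n";
      }
  } else {
      # (data, destination, tag, comm)
      MPI_Send("hello from rank $rank of $size", 0, 0, MPI_COMM_WORLD);
  }
  MPI_Finalize();

It launches the usual way, e.g. mpirun -np 4 perl hello_mpi.pl, so the
same job script should work inside or outside a batch system.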
--
This signature was left intentionally almost blank.
http://www.this-page-intentionally-left-blank.org/
Carole
2008-03-21 19:46:50 UTC
Permalink
Thanks for responding. We are using a TCP/IP-capable driver. My sys
admin says he was considering Open MPI instead of MVAPICH because he
had heard positive things about it from other sysadmins, so maybe we
will try that route first. Thanks for the advice. Carole
Carole
2008-04-15 20:22:43 UTC
Permalink
Post by Georg Bisseling
Would it be an option for you to install a TCP/IP-capable
driver for your InfiniBand cards?
That way you could use the cards in the same way as
Ethernet cards, with just the same software.
Another option could be to use another MPI that can
handle InfiniBand (HP MPI, Intel MPI, Open MPI, ...)
instead of MVAPICH.
Well, just a follow-up: we installed Open MPI. It seems to run fine
as long as we run outside of PBS. However, when I run my Monte Carlo
simulations through PBS on the cluster with InfiniBand, they are slower
than on the original cluster without InfiniBand, whereas when I run
outside of PBS the run times are about 20% faster with InfiniBand. I am
not sure what is happening, but if I figure it out I will let you know.
Thanks, Georg, for the heads up on Open MPI.
Carole
Georg Bisseling
2008-04-17 13:37:57 UTC
Permalink
Post by Carole
Post by Georg Bisseling
Would it be an option for you to install a TCP/IP-capable
driver for your InfiniBand cards?
That way you could use the cards in the same way as
Ethernet cards, with just the same software.
Another option could be to use another MPI that can
handle InfiniBand (HP MPI, Intel MPI, Open MPI, ...)
instead of MVAPICH.
Well, just a follow-up: we installed Open MPI. It seems to run fine
as long as we run outside of PBS. However, when I run my Monte Carlo
simulations through PBS on the cluster with InfiniBand, they are slower
than on the original cluster without InfiniBand, whereas when I run
outside of PBS the run times are about 20% faster with InfiniBand. I am
not sure what is happening, but if I figure it out I will let you know.
Thanks, Georg, for the heads up on Open MPI.
Carole
Is it possible that PBS creates a configuration that ends up using Ethernet?

Maybe some PBS daemons/watchdogs wake up regularly to do whatever they have to?
If you have enough nodes, there will always be one node in a collective
operation that is slower than the others. I saw a similar effect cause
a slowdown of 10-15% on 8 nodes.
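
One way to see where the time goes is a crude ping-pong timing between
two ranks, run once inside PBS and once from an interactive shell. The
sketch below uses the Parallel::MPI::Simple binding mentioned earlier in
the thread (again, call names are from memory, so treat it as a starting
point rather than gospel). If the PBS run shows much higher per-message
times, you could try forcing the InfiniBand transport on the mpirun line
(for Open MPI of that vintage, something like --mca btl openib,sm,self)
and see whether the numbers change.

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Time::HiRes qw(gettimeofday tv_interval);
  use Parallel::MPI::Simple;           # assumed CPAN binding, see earlier post

  MPI_Init();
  my $rank    = MPI_Comm_rank(MPI_COMM_WORLD);
  my $iters   = 1000;
  my $payload = 'x' x 65536;           # 64 KiB message

  my $t0 = [gettimeofday];
  for (1 .. $iters) {
      if ($rank == 0) {
          MPI_Send($payload, 1, 0, MPI_COMM_WORLD);     # send to rank 1, tag 0
          my $echo = MPI_Recv(1, 0, MPI_COMM_WORLD);    # wait for the bounce
      } elsif ($rank == 1) {
          my $msg = MPI_Recv(0, 0, MPI_COMM_WORLD);     # receive from rank 0
          MPI_Send($msg, 0, 0, MPI_COMM_WORLD);         # bounce it straight back
      }
  }
  if ($rank == 0) {
      my $elapsed = tv_interval($t0);
      printf "%d round trips of %d bytes: %.3f s (%.1f us each)\n",
          $iters, length($payload), $elapsed, 1e6 * $elapsed / $iters;
  }
  MPI_Finalize();

Run it with mpirun -np 2, once from a PBS job script and once outside,
and compare the per-round-trip times.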
--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/