MPI on two-NIC clusters

Joachim Worringen joachim at lfbs.RWTH-Aachen.DE
Tue Feb 5 00:27:21 PST 2002


Slick . wrote:
> 
> Couple of questions,
> 
> 1. Does channel bonding (the kernel patch) take care of MPI on two-NIC Beowulf architectures? Or do I need to do something else?

If the two NICs appear as one network device, you should be ready to go. 
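
For reference, a setup with the Linux bonding driver looks roughly
like the sketch below (the mode, the device names eth0/eth1 and the
address are illustrative; see Documentation/networking/bonding.txt in
your kernel source for the exact options):

  # load the bonding driver; mode 0 balances round-robin across slaves
  modprobe bonding mode=0
  # give the virtual device an address and bring it up
  ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
  # attach both physical NICs as slaves of bond0
  ifenslave bond0 eth0
  ifenslave bond0 eth1

An MPI built on TCP then uses bond0 like any other interface.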

> 2. Is MP_lite better than MPI in terms of performance? Or is it application-dependent?

It depends on the MPI implementation and the network, on the memory
bandwidth of the system and, to a lesser degree, on the CPU speed. A
good MPI implementation adds very little overhead to the raw network
performance. The MPI implementation on top of SCI that I have developed
(SCI-MPICH) approaches 100% of the raw SCI bandwidth for large
messages and adds less than 3 us of latency for small messages, while
giving full, unrestricted MPI semantics. YMMV for other MPI
implementations.
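
If you want to compare implementations on a given cluster, a simple
ping-pong microbenchmark yields both numbers. Below is a minimal
sketch (my illustration, not part of the original exchange; REPS and
the 1 MB message size are arbitrary choices):

  /*
   * Minimal MPI ping-pong sketch: measures one-way small-message
   * latency and large-message bandwidth between ranks 0 and 1.
   * Compile with e.g.: mpicc -O2 pingpong.c -o pingpong
   */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define REPS   1000
  #define MAXMSG (1 << 20)  /* 1 MB "large" message */

  /* returns the one-way time per message of the given size */
  static double pingpong(char *buf, int bytes, int rank)
  {
      MPI_Status st;
      double t0;
      int i;

      t0 = MPI_Wtime();
      for (i = 0; i < REPS; i++) {
          if (rank == 0) {
              MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, &st);
          } else {
              MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &st);
              MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
          }
      }
      return (MPI_Wtime() - t0) / (2.0 * REPS);
  }

  int main(int argc, char **argv)
  {
      int rank, size;
      char *buf;
      double lat, bw;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      if (size != 2) {  /* the benchmark assumes exactly two ranks */
          if (rank == 0)
              fprintf(stderr, "run with exactly two processes\n");
          MPI_Finalize();
          return 1;
      }

      buf = malloc(MAXMSG);
      lat = pingpong(buf, 0, rank);       /* zero bytes: pure latency */
      bw  = MAXMSG / pingpong(buf, MAXMSG, rank);

      if (rank == 0)
          printf("latency %.2f us, bandwidth %.1f MB/s\n",
                 lat * 1e6, bw / 1e6);

      free(buf);
      MPI_Finalize();
      return 0;
  }

Run it with two processes across the network in question, e.g.
"mpirun -np 2 ./pingpong".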

I wouldn't bother with MP_lite if a decent full MPI implementation is
available for the platform.

 Joachim

-- 
|  _  RWTH|  Joachim Worringen
|_|_`_    |  Lehrstuhl fuer Betriebssysteme, RWTH Aachen
  | |_)(_`|  http://www.lfbs.rwth-aachen.de/~joachim
    |_)._)|  fon: ++49-241-80.27609 fax: ++49-241-80.22339


