Dolphin Wulfkit

Joachim Worringen joachim at lfbs.RWTH-Aachen.DE
Fri May 3 00:54:30 PDT 2002

Patrick, I agree with your posting - I basically say the same in the
"disclaimer" of the www-page
with the PMB results.

The placement of processes (or: the assignment of ranks) on SMP nodes
is part of the performance strategy that an MPI library is free to
follow. It seems that ScaMPI does a straight round-robin mapping of
processes to node ranks, while MPICH-GM "groups" processes on the
same node. Both approaches are, of course, valid.
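To make the difference concrete, here is a small sketch (my own illustration, not code from either library) of how 8 ranks would land on 4 dual-CPU nodes under the two placement strategies as described above:

```python
# Hypothetical illustration of the two rank-placement strategies.
# Neither function is ScaMPI or MPICH-GM code; the names are invented.

def round_robin(n_ranks, n_nodes):
    """Rank r goes to node r mod n_nodes (ScaMPI-style, as described)."""
    return [r % n_nodes for r in range(n_ranks)]

def grouped(n_ranks, n_nodes):
    """Consecutive ranks fill one node before moving on (MPICH-GM-style)."""
    per_node = n_ranks // n_nodes
    return [r // per_node for r in range(n_ranks)]

if __name__ == "__main__":
    # 8 ranks on 4 dual-CPU nodes:
    print(round_robin(8, 4))  # [0, 1, 2, 3, 0, 1, 2, 3]
    print(grouped(8, 4))      # [0, 0, 1, 1, 2, 2, 3, 3]
```

With round-robin placement, neighbouring ranks sit on different nodes, so nearest-neighbour benchmarks measure the network; with grouping, they sit on the same node and measure shared-memory transfers - which is exactly why the two libraries can report such different numbers for the same test.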

The attachment with your results wasn't readable for me; can you post
it again or send it via mail? I will be happy to add it to the page. If
you have "real" results for the P4-i860 platform, please send them,
too, and I'll replace the other ones. As it says in the disclaimer... ;-)

Nevertheless, I think it helps to talk about real numbers - without
claiming that these numbers give the complete picture (again, see the
disclaimer). Unfortunately, I have no access to the mixed
Myrinet/SCI cluster to run application benchmarks. Those would also
cover the other performance factors, like the ones Tony stresses so
much - however, it will unfortunately be hard to find an application
that uses persistent communication. On the other hand, a quality MPI
implementation should not show big differences between explicit and
"implicit" persistent communication, thanks to caching of the required
resources. This is also my personal experience with SCI-MPICH (which
*does* optimize persistent communication - but the only benefit is
that the related resources will be the last to be thrown out of the
related cache, which usually does not happen too often anyway).
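The caching argument can be sketched like this (a toy model of my own, not SCI-MPICH code - the class and key names are invented): if the expensive setup for a (buffer, peer) pair is kept in an LRU cache, then an application that repeatedly sends from the same buffer to the same peer pays the setup cost only once, whether or not it uses explicit persistent requests:

```python
# Hypothetical sketch of resource caching in an MPI library's send path.
# The expensive "setup" stands in for e.g. registering a memory region.
from collections import OrderedDict

class ChannelCache:
    """LRU cache of per-(buffer, peer) communication resources."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.setups = 0  # counts how often the expensive setup ran

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)   # repeated use keeps it "hot"
            return self.cache[key]
        self.setups += 1                  # expensive path: set up resource
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[key] = object()        # stand-in for the real resource
        return self.cache[key]

if __name__ == "__main__":
    ch = ChannelCache()
    # "Implicit" persistent pattern: same buffer and peer on every send.
    for _ in range(100):
        ch.get(("buf0", "rank1"))
    print(ch.setups)  # 1 -- the setup cost is paid only once
```

This is why explicit persistent requests mainly ensure the resource is never evicted, rather than making each individual transfer faster.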

BTW, I have an 8-node 800MHz Dual-PII SW_LE cluster with SCI here,
which seems to nicely match the one you mentioned. Numbers will be up soon.

 regards, Joachim

|  _  RWTH|  Joachim Worringen
|_|_`_    |  Lehrstuhl fuer Betriebssysteme, RWTH Aachen
  | |_)(_`|
    |_)._)|  fon: ++49-241-80.27609 fax: ++49-241-80.22339

More information about the Beowulf mailing list