Beowulf & Fluid Mechanics
Josip Loncaric
josip at icase.edu
Fri Jun 30 08:25:57 PDT 2000
Nicolas Lardjane wrote:
>
> Hello.
>
> I'd like to know if anyone has experience using a PC cluster for
> solving fluid mechanics problems by domain decomposition methods. The
> question is: what performance can be expected compared to
> supercomputers?
On coarse-grained problems, our 32 single-CPU 400MHz Pentium II boxes
perform about as well as a 16-CPU (R10000, 250MHz, IP28) SGI Origin
2000. However, our cost was 5-10 times lower. See
http://www.icase.edu/CoralProject.html
and particularly Brian Allan's results
http://www.icase.edu/~allan/coral/Nov_99/index.html
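For anyone who has not written a domain decomposition code before, the
communication pattern looks roughly like the sketch below: each node owns a
piece of the grid plus a thin layer of ghost cells that it trades with its
neighbors every iteration. This is only an illustrative 1D example with a
made-up local grid size, not our actual Coral code; the point is that a
coarse-grained problem does a lot of local work per byte exchanged, which is
why switched Fast Ethernet is good enough in that regime.

/* Illustrative 1D domain decomposition with ghost-cell exchange.
 * NLOCAL is an assumed, made-up slab size. */
#include <mpi.h>
#include <stdlib.h>

#define NLOCAL 100000

int main(int argc, char **argv)
{
    int rank, size, left, right, iter, i;
    double *u, *unew, *tmp;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    u    = calloc(NLOCAL + 2, sizeof(double));  /* +2 ghost cells */
    unew = calloc(NLOCAL + 2, sizeof(double));
    left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    for (iter = 0; iter < 100; iter++) {
        /* swap ghost cells with both neighbors */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, &st);
        MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, &st);
        /* relax the interior: lots of local work per byte communicated */
        for (i = 1; i <= NLOCAL; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
        tmp = u; u = unew; unew = tmp;
    }

    MPI_Finalize();
    free(u); free(unew);
    return 0;
}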
Fine-grained problems do not scale as well (our switched Fast Ethernet
network becomes a bottleneck beyond about 10 nodes). Recently, Giganet
loaned us some hardware to test, and it did improve scaling in such
cases (speedup was actually better than on the SGI Origin). Brian's
Giganet results are available at
http://www.icase.edu/~allan/coral/June_00/index.html
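Latency figures like the ones I quote below are typically obtained from a
simple ping-pong measurement. Here is a minimal sketch of such a test (my own
illustrative version with an assumed repetition count, not necessarily the
benchmark Brian ran): rank 0 bounces a one-byte message off rank 1 and reports
half the average round-trip time.

/* Ping-pong sketch for measuring small-message one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, reps = 10000;   /* assumed repetition count */
    char byte = 0;
    double t0, t1;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)   /* half the round trip is the one-way latency */
        printf("one-way latency ~ %.1f microseconds\n",
               (t1 - t0) / (2.0 * reps) * 1e6);
    MPI_Finalize();
    return 0;
}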
To me, the most interesting conclusions from Brian's tests concern MPI
implementations. MPI/Pro really shows its advantages on dual-CPU
machines with a very fast network, despite the fact that MVICH has much
lower latency. We used to blame memory bottlenecks for the 25%
performance penalty typically observed on SMP machines with Fast
Ethernet, but it now appears that this penalty is primarily due to
polling in LAM, MPICH and MVICH. With Giganet and MVICH, the SMP
performance penalty grows to about 40%, almost negating the benefit of
the second CPU. With MPI/Pro, the SMP performance penalty disappears.
On the other hand, the LAM/MPICH/MVICH implementations work somewhat
better on uniprocessor nodes. Moreover, Giganet latency with MVICH is
only 14 microseconds, much better than MPI/Pro's 86 microseconds. While
we were limited to 16 CPUs in these tests, it appears that MPI/Pro's
higher latency may negate its SMP performance advantage when more than
about 20 CPUs are used (as the number of CPUs grows, more and smaller
messages are exchanged, and the test becomes more latency sensitive).
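To illustrate that last point, here is a back-of-envelope model (my
assumptions, not measured data): for a fixed 2D grid split among P CPUs, each
halo message shrinks like 1/sqrt(P), so the bandwidth cost per message drops
while the latency cost stays constant, and a higher-latency MPI eventually
loses its edge. The grid size and the ~100 MB/s bandwidth figure below are
assumptions chosen only to make the trend visible; the 14 and 86 microsecond
latencies are the ones quoted above.

/* Back-of-envelope per-step communication cost:
 *     t(P) = 4 * ( latency + 8*N/sqrt(P) / bandwidth )
 * i.e. 4 neighbors, each exchanging a halo strip of N/sqrt(P) doubles. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double N  = 1000.0;          /* assumed global grid dimension */
    const double bw = 100e6;           /* assumed ~100 MB/s usable bandwidth */
    const double lat_mvich  = 14e-6;   /* MVICH/Giganet latency quoted above */
    const double lat_mpipro = 86e-6;   /* MPI/Pro latency quoted above */
    int p;

    for (p = 4; p <= 64; p *= 2) {
        double bytes = 8.0 * N / sqrt((double)p);
        double t1 = 4.0 * (lat_mvich  + bytes / bw);
        double t2 = 4.0 * (lat_mpipro + bytes / bw);
        printf("P=%2d  MVICH %.0f us  MPI/Pro %.0f us per step\n",
               p, t1 * 1e6, t2 * 1e6);
    }
    return 0;
}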
Sincerely,
Josip
--
Dr. Josip Loncaric, Senior Staff Scientist mailto:josip at icase.edu
ICASE, Mail Stop 132C PGP key at http://www.icase.edu./~josip/
NASA Langley Research Center mailto:j.loncaric at larc.nasa.gov
Hampton, VA 23681-2199, USA Tel. +1 757 864-2192 Fax +1 757 864-6134