[Beowulf] Really efficient MPIs??

Michael H. Frese Michael.Frese at NumerEx.com
Wed Nov 28 05:49:46 PST 2007


At 10:31 PM 11/27/2007, you wrote:
>Hello,
>
>Because today the clusters with multicore nodes are quite common and 
>the cores within a node share memory.
>
>Which implementations of MPI (commercial or free) make 
>automatic and efficient use of shared memory for message passing 
>within a node? (That is, which MPI libraries automatically communicate 
>over shared memory instead of the interconnect on the same node?)
>
>regards,
>Ali.
>_______________________________________________
>Beowulf mailing list, Beowulf at beowulf.org
>To change your subscription (digest mode or unsubscribe) visit 
>http://www.beowulf.org/mailman/listinfo/beowulf

The latest MPICH2 from Argonne (maybe version 1.0.6), compiled for the 
ch3:nemesis shared-memory device, has very low latency -- as low as 
0.06 microseconds -- and very high bandwidth.  It beats LAM in 
Argonne's tests.  Here are the details: 
www.pvmmpi06.org/talks/CommProt/buntinas.pdf, 
http://info.mcs.anl.gov/pub/tech_reports/reports/P1346.pdf, 
ftp.mcs.anl.gov/pub/mpi/mpich2-doc-CHANGES.txt.  We are getting 
higher latencies than that on various hardware, so obviously YMMV.
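
For anyone wanting to try it, a minimal build sketch follows. The 
--with-device=ch3:nemesis configure option is the documented way to select 
the nemesis channel in MPICH2 of that vintage; the install prefix, tarball 
name, and the program being launched are illustrative, not prescriptive.

# Build MPICH2 with the ch3:nemesis device so that intra-node
# messages go through shared memory rather than the network stack.
# (Version number, prefix, and program name are examples only.)
tar xzf mpich2-1.0.6.tar.gz
cd mpich2-1.0.6
./configure --with-device=ch3:nemesis --prefix=/opt/mpich2-nemesis
make && make install

# Launch 4 ranks on a single multicore node; with nemesis,
# communication among these ranks stays in shared memory.
/opt/mpich2-nemesis/bin/mpiexec -n 4 ./your_mpi_program

Measuring a simple ping-pong between two ranks on the same node, with and 
without the nemesis device, is the quickest way to see the difference on 
your own hardware.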


Mike 