[Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

Craig Tierney Craig.Tierney at noaa.gov
Thu Apr 8 11:42:49 PDT 2010


richard.walsh at comcast.net wrote:
> 
> All, 
> 
> 
> What are the approaches and experiences of people interconnecting 
> clusters of more than 128 compute nodes with QDR InfiniBand technology? 
> Are people directly connecting to chassis-sized switches? Using multi-tiered 
> approaches which combine 36-port leaf switches? What are your experiences? 
> What products seem to be living up to expectations? 
> 
> 
> I am looking for some real world feedback before making a decision on 
> architecture and vendor. 
> 
> 

We have been telling our vendors to design a multi-level tree using
36-port switches that provides approximately 70% bisection bandwidth.
On a 448-node Nehalem cluster this has worked well (weather, hurricane,
and some climate modeling).  This design (15 ports up / 21 ports down
per leaf switch) lets us scale the system to 714 nodes.
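
For anyone sizing a similar tree, here is a quick back-of-the-envelope
sketch of that 15 up / 21 down arithmetic in Python.  It assumes a plain
two-level fat tree of 36-port switches; the leaf counts are simply
derived from the node numbers above, not from any vendor-specific
design rule.

    # Two-level fat tree built from 36-port switches (illustrative only).
    SWITCH_PORTS = 36
    DOWNLINKS = 21                        # leaf ports facing compute nodes
    UPLINKS = SWITCH_PORTS - DOWNLINKS    # 15 leaf ports facing the spine

    # Bisection bandwidth relative to a full (non-blocking) fat tree:
    print(f"bisection ratio: {UPLINKS / DOWNLINKS:.0%}")        # ~71%

    # Leaf switches needed for the current 448-node system:
    nodes_now = 448
    leaves_now = -(-nodes_now // DOWNLINKS)                     # ceil -> 22
    print(f"{nodes_now} nodes -> {leaves_now} leaf switches")

    # The 714-node ceiling corresponds to 34 fully populated leaves:
    print(f"714 nodes -> {714 // DOWNLINKS} leaves x {DOWNLINKS} nodes each")

At 15:21 the fabric is oversubscribed roughly 1.4:1, which is where the
~70% bisection figure comes from.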

Craig

> Thanks, 
> 
> 
> rbw 
> 
> 