[Beowulf] QDR InfiniBand interconnect architectures ... approaches ...
Greg Lindahl
lindahl at pbm.com
Thu Apr 8 11:14:11 PDT 2010
On Thu, Apr 08, 2010 at 04:13:21PM +0000, richard.walsh at comcast.net wrote:
>
> What are the approaches and experiences of people interconnecting
> clusters of more than 128 compute nodes with QDR InfiniBand technology?
> Are people directly connecting to chassis-sized switches? Using multi-tiered
> approaches that combine 36-port leaf switches?
I would expect everyone to use a chassis at that size, because it's cheaper
than running more cables. That was true on day 1 with IB; the only question
is "are the switch vendors charging too high a price for big switches?"
> I am looking for some real world feedback before making a decision on
> architecture and vendor.
Hopefully you're planning on benchmarking your own app -- the HCAs and
the switch silicon from QLogic and Mellanox have considerably different
application-dependent performance characteristics.
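As a starting point before running your real application, a minimal MPI
ping-pong gives a quick sanity check of fabric latency. This is an
illustration only (it assumes mpi4py and a working MPI stack on the
cluster), not a substitute for benchmarking your own code:

    # run with exactly 2 ranks on two different nodes, e.g.:
    #   mpirun -np 2 --host nodeA,nodeB python pingpong.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    if comm.Get_size() != 2:
        raise SystemExit("run with exactly 2 ranks")
    rank = comm.Get_rank()
    buf = bytearray(8)       # small message: measures latency, not bandwidth
    iters = 10000

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        else:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    t1 = MPI.Wtime()

    if rank == 0:
        # one-way latency = round-trip time / 2, averaged over iterations
        print("half round-trip: %.2f us" % ((t1 - t0) / iters / 2 * 1e6))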
-- greg