[Beowulf] 1.2 us IB latency?

Peter St. John peter.st.john at gmail.com
Wed Mar 28 08:07:36 PDT 2007


>
> >> also, I'm sorta amazed people keep selling (and presumably buying)
> >> dual-port IB cards.  doesn't that get quite expensive, switch-wise?
> >
> > Not defending them, but it could be useful if you have a stand-alone
> > IB net for, say, storage or something else not MPI. Also, it's not
> > like they're that much more expensive than single-port ones...
>
> yeah, I can see PHBs buying redundant fabrics.  I'd be more interested
> in using the higher port count for FNN or related topologies (assuming
> switches are cheap, at least at some size...)


I was wondering if Peter K's remark generalized: if there are multiple
ports, the node has a choice, which may be application-dependent. One port
for MPI and the other to a disk farm seems clear, but it still isn't obvious
to me that a star topology with a few long cables to one huge switch is always
better than many short cables, with more ports per node but no switches. (I
don't have any feel for how much of a bottleneck a switch is; topologically
it just seems scary.)

I'd been thinking about overlaying a Flat Neighborhood Network with a
hypercube, so that various-sized subclusters could compete to optimize their
topology for an application. But what I imagine building for myself this
summer has too few nodes, and would need too many ports per node, for me to
try that anytime soon.
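The ports-per-node cost of the hypercube part is easy to quantify: a d-dimensional hypercube has 2**d nodes and each node needs d links. A tiny sketch (my illustration, not from the thread):

```python
import math

def hypercube_ports(n_nodes):
    """Return the links (NIC ports) per node for an n_nodes hypercube.
    A d-dimensional hypercube has 2**d nodes and d neighbors per node."""
    d = int(math.log2(n_nodes))
    if 2 ** d != n_nodes:
        raise ValueError("a hypercube needs a power-of-two node count")
    return d

# 16 nodes already demand 4 ports per node just for the hypercube,
# before counting any FNN links overlaid on top of it.
```

So even a modest 16-node hypercube plus FNN overlay quickly exceeds what a dual-port card provides.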

Peter
