[Beowulf] recommendations for a good ethernet switch for connecting ~300 compute nodes

Rahul Nabar rpnabar at gmail.com
Thu Sep 3 09:28:39 PDT 2009


On Thu, Sep 3, 2009 at 10:19 AM, Gus Correa <gus at ldeo.columbia.edu> wrote:
> See these small SDR switches:
>
> http://www.colfaxdirect.com/store/pc/viewPrd.asp?idcategory=7&idproduct=13
> http://www.colfaxdirect.com/store/pc/viewPrd.asp?idproduct=10
>
> And SDR HCA card:
>

Thanks Gus! This info was very useful. A 24-port switch is $2400 and
the card $125, so each compute node would be approximately $300 more
expensive. (How about InfiniBand cables? Are those special, and how
expensive? I did Google but was overwhelmed by the variety available.)

This isn't bad at all, I think. Based on my current node price, it
would take only about a 20% performance boost to justify the
investment. I feel Infy could deliver that. When I calculated this
before, the economics seemed totally off; maybe I had the wrong figures.

The price-scaling seems tough, though. Stacking 24-port switches might
get a bit too cumbersome for 300 servers. But when I look at the
corresponding 48- or 96-port switches, the per-port price seems to
shoot up. Is that typical?
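
For what it's worth, here is the back-of-the-envelope arithmetic that
makes me nervous about stacking small switches (my own sketch of
standard two-tier non-blocking fat-tree sizing, not a vendor design):

    # Two-tier non-blocking fat tree built from k-port switches:
    # each leaf uses k/2 ports down to nodes and k/2 up to spines.
    k = 24                      # ports per switch
    max_nodes = k * k // 2      # 288 -- already short of 300 nodes,
                                # so I'd need oversubscription or a
                                # third tier anyway
    leaves = k                  # 24 leaf switches
    spines = k // 2             # 12 spine switches (one uplink per
                                # leaf per spine)
    switches = leaves + spines  # 36 switches total

    print(max_nodes, leaves, spines)      # 288 24 12
    # 36 switches at $2400 is ~$300/node in switching alone, vs the
    # $100/port of one flat 24-port switch -- so some per-port premium
    # on the big chassis switches wouldn't surprise me.
    print(switches * 2400 / max_nodes)    # -> 300.0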

> For a 300-node cluster you need to consider
> optical fiber for the IB uplinks,

You mean compute-node-to-switch and switch-to-switch connections?
Again, any ballpark $$$ figures?

> I don't know about your computational chemistry codes,
> but for climate/oceans/atmosphere (and probably for CFD)
> IB makes a real difference w.r.t. Gbit Ethernet.

I have a hunch (just a hunch) that the computational chemistry codes
we use haven't been optimized to take full advantage of the latency
benefits etc. Some of the stuff they do is pretty bizarre and
inefficient if you look at their source code (writing to large I/O
files all the time, e.g.). I know this ought to be fixed, but that
seems a problem for another day!

-- 
Rahul


