[Beowulf] recommendations for a good ethernet switch for connecting ~300 compute nodes

Smith, Brian brs at admin.usf.edu
Thu Sep 3 10:40:52 PDT 2009


Where are you finding cables for $30?  The lowest I've been able to find for 1 m cables is in the $60 range.  I have a project going right now that would benefit greatly from $30 cables.


-----Original Message-----
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Gilad Shainer
Sent: Thursday, September 03, 2009 12:51 PM
To: Rahul Nabar; Gus Correa
Cc: Beowulf
Subject: RE: [Beowulf] recommendations for a good ethernet switch for connecting ~300 compute nodes

> > See these small SDR switches:
> >
> >
> > http://www.colfaxdirect.com/store/pc/viewPrd.asp?idproduct=10
> >
> > And SDR HCA card:
> >
> Thanks Gus! This info was very useful. A 24-port switch is $2400 and
> the card $125. Thus each compute node would be approximately $300 more
> expensive. (How about InfiniBand cables? Are those special, and how
> expensive? I did google but was overwhelmed by the variety available.)

You can find copper cables from around $30, so the $300 figure will
include the cable too.
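The $300 figure follows from the list prices quoted above; a quick sketch of the arithmetic (prices are the ones mentioned in this thread, not current quotes):

```python
# Per-node cost of adding IB, using the prices quoted in the thread.
SWITCH_PRICE = 2400.0   # 24-port SDR switch
PORTS = 24
HCA_PRICE = 125.0       # SDR HCA card
CABLE_PRICE = 30.0      # low-end copper cable price quoted above

# Each node pays for one switch port, one HCA, and one cable.
per_node = SWITCH_PRICE / PORTS + HCA_PRICE + CABLE_PRICE
print(f"IB cost per node: ${per_node:.0f}")  # $255, i.e. roughly the $300 estimate
```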

> This isn't bad at all, I think. If I base it on my current node price,
> it would require only about a 20% performance boost to justify this
> investment. I feel Infy could deliver that. When I calculated it
> before, the economics were totally off; maybe I had the wrong figures.
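The 20% break-even reasoning above can be sketched as: cost per unit of throughput stays constant when the speedup matches the fractional cost increase. The node price below is a hypothetical figure for illustration, not one given in the thread:

```python
# Break-even speedup: the cluster delivers the same work per dollar when
# speedup == (node_price + interconnect_extra) / node_price.
node_price = 1500.0   # hypothetical base cost per compute node (assumption)
ib_extra = 300.0      # extra per node for IB, from the thread

breakeven_speedup = (node_price + ib_extra) / node_price
print(f"break-even: {breakeven_speedup:.0%} of baseline throughput")  # 120%
```

At a $1,500 node price, the $300 adder raises cluster cost by 20%, so a 20% application speedup is the break-even point; a cheaper node raises the bar, a pricier one lowers it.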

You can always run your app on an available user system and see the
performance boost you would be able to get. For example, you can use
the center (free of charge) -

> The price-scaling seems tough, though. Stacking 24-port switches might
> get a bit too cumbersome for 300 servers. But when I look at
> corresponding 48- or 96-port switches, the per-port price seems to
> shoot up. Is that typical?

It is the same as buying blades. If you get the switches fully
populated, then it will be cost-effective. There is also a 324-port
switch, which should be a good option.
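The "cumbersome" point above can be made concrete. Assuming a standard non-blocking two-tier fat tree (half of each leaf switch's ports face nodes, half are uplinks), 24-port switches top out just below 300 nodes, which is why a large chassis like the 324-port switch is attractive. A sketch of the topology arithmetic:

```python
import math

def two_tier_counts(nodes, ports=24):
    """Leaf/spine switch counts for a non-blocking two-tier fat tree
    built from fixed `ports`-port switches. A topology-arithmetic
    sketch, not vendor guidance."""
    down = ports // 2            # node-facing ports per leaf
    capacity = ports * down      # each spine has `ports` ports, one per leaf
    if nodes > capacity:
        return None              # needs a third tier or a bigger chassis
    leaves = math.ceil(nodes / down)
    spines = down                # ports/2 spines give full bisection bandwidth
    return leaves, spines

print(two_tier_counts(288))  # (24, 12): the two-tier maximum with 24-port switches
print(two_tier_counts(300))  # None: 300 nodes exceed a 24-port two-tier fabric
```

So 300 servers would already force a third switching tier (and a lot of cables) with 24-port building blocks, while a single 324-port chassis absorbs the whole cluster in one hop.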

> > For a 300-node cluster you need to consider
> > optical fiber for the IB uplinks,
> You mean compute-node-to-switch and switch-to-switch connections?
> Again, any $$$ figures, ballpark?

It all depends on the speed. If you are using IB SDR or DDR, copper
cables will be enough. For QDR you can use passive copper up to 7-8
meters, and active copper up to 12 meters, before you need to move to
fiber.
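The reach rules above can be summarized as a small lookup; the distance cutoffs are the figures quoted in this post, not a spec table:

```python
def ib_cable_choice(rate, distance_m):
    """Pick a cable type from the reach figures quoted in the thread:
    SDR/DDR run on copper at these lengths; QDR needs passive copper
    <= ~7 m, active copper <= ~12 m, and fiber beyond that."""
    rate = rate.upper()
    if rate in ("SDR", "DDR"):
        return "passive copper"
    if rate == "QDR":
        if distance_m <= 7:
            return "passive copper"
        if distance_m <= 12:
            return "active copper"
        return "optical fiber"
    raise ValueError(f"unknown IB rate: {rate}")

print(ib_cable_choice("QDR", 10))  # active copper
```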

> > I don't know about your computational chemistry codes,
> > but for climate/oceans/atmosphere (and probably for CFD)
> > IB makes a real difference w.r.t. Gbit Ethernet.
> I have a hunch (just a hunch) that the computational chemistry codes
> we use haven't been optimized to take full advantage of the latency
> benefits etc. Some of the stuff they do is pretty bizarre and
> inefficient if you look at their source code (e.g. writing to large
> I/O files all the time). I know this ought to be fixed, but that
> seems a problem for another day!

On the same web site I have listed above, there are some best practices
for application performance. You can check them out and see if some of
them are relevant.

Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
