[Beowulf] recommendations for a good ethernet switch for connecting ~300 compute nodes

Gus Correa gus at ldeo.columbia.edu
Thu Sep 3 08:19:33 PDT 2009

Rahul Nabar wrote:

>> 24 port SDR
>> IB switches are available, and relatively inexpensive.
> Is there an approximate $$$ figure someone can throw out? These numbers
> have been pretty hard to get.
>> 24 port SDR PCIe
>> cards are available and relatively inexpensive.
> Ditto. Any $ figures?
> All my calculations boosted up the $ price of a node to a point where
> the performance would have to be very stellar to warrant the spending.
> And really, the plain-vanilla Nehalem ethernet config is not doing too
> badly for us yet. My main concern now is scaling.

Hi Rahul

See these small SDR switches:


And SDR HCA card:


We bought DDR ourselves, but our cluster is small, with just one
36-port switch.

For a 300-node cluster you need to consider
optical fiber for the IB uplinks,
and switches with that capability, or buy the appropriate adapters.
Regular copper IB cables are length-limited, so most likely they
can only be used for node-to-switch connections.

Also, for Opteron, Supermicro (and probably others)
offers motherboards with onboard IB adapters on 1U dual-node chassis.
I wonder if there is something similar for Nehalem.

I don't know about your computational chemistry codes,
but for climate/oceans/atmosphere (and probably for CFD)
IB makes a real difference w.r.t. Gbit Ethernet.
For us there was no point in trading IB for a larger number
of nodes.
OTOH, if your codes run mostly intra-node, there is no advantage
in buying a fast interconnect, but I doubt your Chem codes
are happy with only 8 processes per job.
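A rough way to check how intra-node-bound a code is (sketched here with
Open MPI syntax; the hostnames and the binary name are hypothetical
placeholders) is to time the same 8-rank job packed onto one node versus
split across two nodes over GigE:

```shell
# Time the same 8-rank job two ways and compare wall-clock times.
# node01/node02 and ./my_chem_code are hypothetical placeholders.
mpirun -np 8 -host node01 ./my_chem_code                     # all ranks on one node
mpirun -np 8 -host node01,node02 -npernode 4 ./my_chem_code  # ranks split over GigE
```

If the two-node run is not much slower, GigE is probably adequate for
that code at your current scale.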

Also, with IB, you could dedicate one of your nodes'
Gbit Ether ports to I/O only, with all MPI traffic using IB.
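With Open MPI, for instance, that split can be forced at launch time (a
sketch only; MCA parameter names vary by version, and the application
name is a placeholder):

```shell
# Restrict MPI point-to-point traffic to the InfiniBand (openib) BTL,
# plus shared memory (sm) within a node and self for loopback.
# The TCP BTL is not listed, so the GigE port carries only I/O (NFS etc.).
mpirun --mca btl openib,sm,self -np 64 ./my_mpi_app
```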

My $0.02
Gus Correa
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
