[Beowulf] Infiniband: How to go beyond the 24-port barrier?
Gus Correa
gus at ldeo.columbia.edu
Mon Aug 25 15:20:56 PDT 2008
Hello, Beowulf fans and network pros,
Imagine a cluster with 24 compute nodes, one head node, and one storage
node.
Let's say that one wants to install InfiniBand (IB) and use it for MPI
and/or for NFS or parallel file system services.
The price of IB switches is said to rise sharply beyond 24 ports.
Questions:
1) What is a cost-effective yet efficient way to connect this cluster
with IB?
2) How many switches are required, and of what size? (I attempt a rough
port count right after these questions.)
3) How should these switches be connected to the nodes and to each
other, i.e., with which topology?
4) Do the same principles and topologies apply to Ethernet switches?
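To give an idea of the back-of-the-envelope arithmetic behind question 2,
here is a rough sizing sketch in Python. It assumes a plain two-level
leaf/spine (fat-tree) fabric built from fixed 24-port switches; the
sizing rule and the oversubscription ratios are my own guesses, not
anything from a vendor.

    # Rough two-level (leaf/spine) sizing with fixed 24-port switches.
    # All numbers are assumptions for illustration, not vendor guidance.
    import math

    def size_fabric(hosts, ports_per_switch=24, oversubscription=1.0):
        # Split each leaf's ports between hosts (down) and uplinks (up)
        # so that down/up matches the desired oversubscription ratio.
        down = int(math.floor(ports_per_switch * oversubscription
                              / (1.0 + oversubscription)))
        up = ports_per_switch - down
        leaves = int(math.ceil(float(hosts) / down))
        # Spread every leaf's uplinks across the spine switches.
        spines = int(math.ceil(float(leaves * up) / ports_per_switch))
        return leaves, spines, down, up

    for ratio in (1.0, 2.0):   # full bisection vs. 2:1 oversubscribed
        leaves, spines, down, up = size_fabric(26, 24, ratio)
        print("%.0f:1 -> %d leaf + %d spine switches "
              "(%d hosts and %d uplinks per leaf)"
              % (ratio, leaves, spines, down, up))

For the 26 nodes above this suggests 3 leaf + 2 spine switches at full
bisection, or 2 leaf + 1 spine at 2:1 oversubscription; with only two
leaf switches one could presumably skip the spine and cable the two
leaves straight to each other.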
If anyone has a pointer to an article or a link to a web page that
explains this, please just send it to me; don't bother answering the
questions.
My (in)experience is limited to small clusters with a single switch,
but hopefully the information will help other list subscribers in the
same situation.
I saw a 24+1-node IB cluster with the characteristics above,
except that the head node seems to double as the storage node.
Each node has a single IB port.
The cluster has *four* 24-port IB switches.
One switch has 24 ports connected, two others have 16 ports connected,
and the last one has 17 ports connected.
It is hard to figure out the topology just by looking at the connectors
and the tightly bundled cables.
In my naive view, the job could be done with only two switches.
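Just to make that hunch concrete, here is the quick arithmetic I have in
mind (assuming the 25 single-port nodes described above; the 13/12 host
split between the two switches is my own guess):

    # Two 24-port switches for 25 single-port nodes (24 compute + 1
    # head/storage), with the leftover ports used as inter-switch links.
    nodes, ports = 25, 24
    hosts_a, hosts_b = 13, 12
    isl = min(ports - hosts_a, ports - hosts_b)     # 11 inter-switch cables
    worst = float(max(hosts_a, hosts_b)) / isl      # ~1.18:1 oversubscription
    print("two switches: %d inter-switch links, ~%.2f:1 oversubscription"
          % (isl, worst))

    # For comparison, the four-switch cluster I looked at: ports with
    # cables plugged in, minus the 25 host links, gives the ports spent
    # on switch-to-switch cabling.
    connected = 24 + 16 + 16 + 17
    isl_ports = connected - nodes                   # 48 ports = 24 cables
    print("four switches: %d ports on %d inter-switch cables"
          % (isl_ports, isl_ports // 2))

So the two-switch layout seems feasible on paper; I may well be missing
something about routing or bisection bandwidth that makes the
four-switch layout preferable.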
Thank you
Gus Correa
--
---------------------------------------------------------------------
Gustavo J. Ponce Correa, PhD - Email: gus at ldeo.columbia.edu
Lamont-Doherty Earth Observatory - Columbia University
P.O. Box 1000 [61 Route 9W] - Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------