[Beowulf] Multiple IB networks in one cluster
Peter Kjellström
cap at nsc.liu.se
Mon Feb 3 02:39:06 PST 2014
On Thursday, January 30, 2014 11:33:11 AM Prentice Bisbal wrote:
> Beowulfers,
>
> I was talking to a colleague the other day about cluster architecture
> and big data, and this colleague was thinking that it would be good to
> have two separate FDR IB networks within a single cluster.
Some random thoughts on this:
* By default both IPC and storage traffic (everything) will go over the same IB link
and (worse) the same virtual lane (VL). If you have more than one switch in your
fabric then this is probably true even if you connect your nodes using two
different ports on different HCAs (see the sketch after this list for one way to
check what each port reports).
* Performance problems from mixing storage and IPC are likely to be related to
congestion (stemming from the per-VL, per-link flow control), not to running out
of bandwidth or message rate.
* Two ports on the same ConnectX-3 HCA will not get you additional
bandwidth (_one_ 4x FDR link ~= 8x PCIe gen3).
* Subnet Manager (version/config) and VL setup can make a big difference.
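For the first and third bullets it can help to look at what each HCA port on a node
actually reports (active links, LIDs, VL capability). The standard tools ibstat and
ibv_devinfo already show this; the snippet below is just a minimal libibverbs sketch
of the same query, not something from the original thread, and the file name and
build line are assumptions:

/* check_ports.c -- enumerate local HCAs and ports via libibverbs.
 * A minimal sketch; the file name and build line are assumptions:
 *   gcc check_ports.c -o check_ports -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
                struct ibv_port_attr pa;
                if (ibv_query_port(ctx, p, &pa) != 0)
                    continue;
                /* width, speed and max_vl_num are printed as the raw
                 * encoded values stored in ibv_port_attr */
                printf("%s port %d: state=%s lid=%u width=%d speed=%d vl_cap=%d\n",
                       ibv_get_device_name(devs[i]), p,
                       ibv_port_state_str(pa.state), pa.lid,
                       pa.active_width, pa.active_speed, pa.max_vl_num);
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}

Whether IPC and storage actually end up on different VLs is then down to the subnet
manager's SL-to-VL mapping and VL arbitration configuration (the last bullet above),
which depends on the opensm version and its QoS settings.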
/Peter