[Beowulf] Multiple IB networks in one cluster

Prentice Bisbal prentice.bisbal at rutgers.edu
Fri Jan 31 08:27:21 PST 2014


On 01/30/2014 07:15 PM, Alex Chekholko wrote:
> Hi Prentice,
> Today, IB probably means Mellanox, so why not get their pre-sales
> engineer to draw you up a fabric configuration for your intended use
> case?

Because I've learned that sales people will tell you anything is 
possible with their equipment if it means a sale.
I posted my question to this list instead of talking to Mellanox 
specifically to get real-world, unbiased information.
> Certainly you can have a fabric where each host has two links, and
> then you segregate the different types of traffic on the different
> links.  But what would that accomplish if they're using the same
> fabric?

Doesn't IB use cross-bar switches? If so, the bandwidth between one pair 
of communicating hosts should not be affected by communication between 
another pair of communicating hosts.
> Certainly you can have totally separate fabrics and each host could
> have links to one or more of those.
> If this were Ethernet, you'd be comparing separate networks vs multiple
> interfaces on the same network vs bonded interfaces on the same
> network.  Not all the concepts translate directly, the main one being
> the default network layout, Mellanox will suggest a strict fat tree.
> Furthermore, your question really just comes down to performance.
> Leave IB out of it.  You're asking: is an interconnect with such and
> such throughput and latency sufficient for my heterogeneous workload
> comprised of bulk data transfers and small messages.  Only you can
> answer that.

This question does not "come down to performance", and this question is 
specifically about IB, so there's no way to leave IB out of it.

This is really a business/economics question as much as it's about 
performance: Is it possible to saturate FDR IB, and if so, how often 
does it happen? How much will it cost for a larger or second IB switch 
and double the number of cables to make this happen? And how hard will 
it be to set up? Will the increased TCO be justified by the increase in 
performance? How can I measure the increase in performance? How can I 
measure, in real-time, the load on my IB fabric, and collect that data 
to see if the investment paid off?
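On the real-time measurement question: one low-overhead approach (a sketch, not an official Mellanox tool) is to sample the per-port traffic counters that the Linux IB stack exposes under sysfs. The device name "mlx4_0" and port number below are placeholders for whatever `ibstat` reports on your hosts; `port_xmit_data` counts 4-byte words, per the kernel's InfiniBand counter documentation.

```python
# Hedged sketch: estimate outbound IB port throughput by sampling the
# sysfs port_xmit_data counter twice. Device/port names are placeholders.
import os
import time

COUNTER = "/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_data"

def throughput_mb_s(first, second, interval_s):
    """Convert two port_xmit_data samples to MB/s.
    The counter is in units of 4-byte words, so multiply by 4 for bytes."""
    return (second - first) * 4 / interval_s / 1e6

def sample(path=COUNTER, interval_s=1.0):
    """Read the counter, wait, read again, and return MB/s over the interval."""
    with open(path) as f:
        a = int(f.read())
    time.sleep(interval_s)
    with open(path) as f:
        b = int(f.read())
    return throughput_mb_s(a, b, interval_s)

if __name__ == "__main__":
    # Only attempt a live sample on a host that actually has the HCA.
    if os.path.exists(COUNTER):
        print(f"TX: {sample():.1f} MB/s")
```

Running something like this from cron (or feeding the counters into Ganglia or a similar collector) would give the historical utilization data needed to decide whether a second fabric ever pays off. The `perfquery` tool from infiniband-diags reads the same counters from the command line.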

Also, the cliche statement "It depends on your application" doesn't 
apply here. This cluster will be available to everyone in a large 
university. I can't predict what will run on it on day 1 or 2 years down 
the road due to the large, diverse user base, and since there hasn't 
been much of an HPC presence here in the past, there's not a lot of 
historical data to review.

> Regards,
> Alex
> On Thu, Jan 30, 2014 at 8:33 AM, Prentice Bisbal
> <prentice.bisbal at rutgers.edu> wrote:
>> Beowulfers,
>> I was talking to a colleague the other day about cluster architecture and
>> big data, and this colleague was thinking that it would be good to have two
>> separate FDR IB networks within a single cluster: one for message-passing,
>> and the other purely for data movement. I'm a bit skeptical of this myself.
>> I was always under the impression that IB has more than enough bandwidth for
>> message-passing and I/O. I have some questions about this idea:
>> 1. Does this make sense?
>> 2. Does anyone have first hand experience with doing this, or can point me
>> to someone who does (articles on line, papers on the topic will suffice)?
>> 3. Would this present any issues for managing the fabric? I know IB is
>> designed to detect loops automatically, but what about making sure certain
>> traffic stays on certain IB interfaces?
>> 4. Since IB uses cross-bar switches (please correct me if I'm wrong), we
>> wouldn't need to duplicate switchgear, just double IB connections on each
>> host, correct?
>> --
>> Prentice
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
>> To change your subscription (digest mode or unsubscribe) visit
>> http://www.beowulf.org/mailman/listinfo/beowulf
