<div dir="ltr">On Fri, Jan 31, 2014 at 11:27 AM, Prentice Bisbal <span dir="ltr"><<a href="mailto:prentice.bisbal@rutgers.edu" target="_blank">prentice.bisbal@rutgers.edu</a>></span> wrote:<br><div class="gmail_extra">
<div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Alex,<div class="im"><br>
<br>
On 01/30/2014 07:15 PM, Alex Chekholko wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Prentice,<br>
<br>
Today, IB probably means Mellanox, so why not get their pre-sales<br>
engineer to draw you up a fabric configuration for your intended use<br>
case?<br>
</blockquote>
<br></div>
> Because I've learned that sales people will tell you anything is possible
> with their equipment if it means a sale. I posted my question to this list
> instead of talking to Mellanox specifically to get real-world, unbiased
> information.
>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Certainly you can have a fabric where each host has two links, and<br>
then you segregate the different types of traffic on the different<br>
links. But what would that accomplish if they're using the same<br>
fabric?<br>
</blockquote>
<br></div>
> Doesn't IB use crossbar switches? If so, the bandwidth between one pair of
> communicating hosts should not be affected by communication between another
> pair of communicating hosts.

The crossbar switch only guarantees non-blocking if the two ports are on the
same line card (i.e. using the same crossbar). Once you start traversing
multiple crossbars, you are sharing links and can experience congestion.
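
To put rough numbers on that, here is a toy model, not a description of any
particular Mellanox switch: the link rate, and the assumption that static
routing has put two flows on the same uplink, are mine. Once two hosts sit on
different leaf crossbars, their traffic shares an uplink with whatever else
got routed there, and the per-flow share drops accordingly.

# Toy model: per-flow bandwidth when flows share an uplink between two
# leaf crossbars in a two-level fat tree. All numbers are illustrative
# assumptions, not specs for any real switch.

LINK_GBPS = 54.5            # approx. FDR 4x data rate after 64b/66b encoding
FLOWS_ON_SHARED_UPLINK = 2  # assume static routing put 2 flows on one uplink

def per_flow_share(link_gbps, flows_on_link):
    """Worst case: flows mapped to the same link split its bandwidth evenly."""
    return link_gbps / max(flows_on_link, 1)

# Same leaf (same crossbar): non-blocking, the flow gets the full link.
print("same crossbar:    %.1f Gb/s" % per_flow_share(LINK_GBPS, 1))

# Different leaves: the flow shares an uplink with another leaf-to-leaf flow.
print("across crossbars: %.1f Gb/s" % per_flow_share(LINK_GBPS, FLOWS_ON_SHARED_UPLINK))

The point is that "non-blocking" is a property of a single crossbar, not of
the whole fabric; how often the shared-uplink case actually bites depends on
the routing algorithm and on how bursty your traffic is.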
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Certainly you can have totally separate fabrics and each host could<br>
have links to one or more of those.<br>
<br>
If this was Ethernet, you'd comparing separate networks vs multiple<br>
interfaces on the same network vs bonded interfaces on the same<br>
network. Not all the concepts translate directly, the main one being<br>
the default network layout, Mellanox will suggest a strict fat tree.<br>
<br>
Furthermore, your question really just comes down to performance.<br>
Leave IB out of it. You're asking: is an interconnect with such and<br>
such throughput and latency sufficient for my heterogeneous workload<br>
comprised of bulk data transfers and small messages. Only you can<br>
answer that.<br>
</blockquote>
<br></div>
> This question does not "come down to performance," and this question is
> specifically about IB, so there's no way to leave IB out of it.
>
> This is really a business/economics question as much as it is a performance
> question: Is it possible to saturate FDR IB, and if so, how often does it
> happen? How much will it cost for a larger or second IB switch and double
> the number of cables to make this happen? And how hard will it be to set
> up? Will the increased TCO be justified by the increase in performance?
> How can I measure the increase in performance? How can I measure, in real
> time, the load on my IB fabric, and collect that data to see if the
> investment paid off?

Generally (lots of hand waving), HPC does not saturate the fabric for IPC
unless it is a many-to-one pattern (e.g. a collective). Where lots of
bandwidth makes the most difference is I/O: distributed file systems probably
put the most bandwidth load on the system.
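
On the "how can I measure it in real time" part: short of buying a
fabric-management product, you can script perfquery from infiniband-diags, or
just poll the per-port counters the Linux IB stack exposes under sysfs. Below
is a minimal sketch of the sysfs approach; the device name is a placeholder,
and the counter paths and the multiply-by-4 octet scaling are my recollection
of the PortXmitData/PortRcvData definitions, so sanity-check them against
your own HCA before trusting the numbers.

# Rough sketch: poll the per-port IB counters under sysfs to watch link
# utilization in (near) real time. Paths and the x4 byte scaling are my
# assumptions -- verify on your hardware. Note that the plain 32-bit
# counters wrap quickly at FDR rates; use the extended 64-bit counters
# if your HCA/driver exposes them.
import time

DEV, PORT = "mlx4_0", 1   # placeholder; adjust for your HCA
BASE = "/sys/class/infiniband/%s/ports/%d/counters" % (DEV, PORT)

def read_counter(name):
    with open("%s/%s" % (BASE, name)) as f:
        return int(f.read())

INTERVAL = 5.0
prev_tx = read_counter("port_xmit_data")
prev_rx = read_counter("port_rcv_data")
while True:
    time.sleep(INTERVAL)
    tx = read_counter("port_xmit_data")
    rx = read_counter("port_rcv_data")
    # Counters are in units of 4 octets (per the PortXmitData definition),
    # so multiply by 4 for bytes, then by 8 for bits.
    tx_gbps = (tx - prev_tx) * 4 * 8 / INTERVAL / 1e9
    rx_gbps = (rx - prev_rx) * 4 * 8 / INTERVAL / 1e9
    print("xmit %6.2f Gb/s   rcv %6.2f Gb/s" % (tx_gbps, rx_gbps))
    prev_tx, prev_rx = tx, rx

Run something like that on a handful of I/O-heavy nodes while a big job is
writing to the parallel file system and you'll see fairly quickly whether you
ever get anywhere near the FDR line rate, which answers the TCO question
better than any sales pitch.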

Scott