<div dir="ltr">We are a RoCE shop with HPC on private cloud and have had a mostly good experience. We have had quite a number of issues over time that were bugs and needed vendor support to resolve, some of which have taken a long time. So, from a maturity perspective, IB is definitely much better. As a matter of priority, ensure that all the kit is RoCE v2; v1 kit has a number of unresolved issues. <div><br></div><div>I do like that it is Ethernet though, especially with all of the cloud kit. If your workloads are heavy on the MPI end you might be better off with IB, but my communities are &lt;500 core jobs.</div><div><br></div><div>Very happy to answer questions on our experience.<br><div><br></div><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">Cheers,<br><br>Lance<br>--<br>Dr Lance Wilson<br><div dir="ltr">Technical Lead ACCS Characterisation Virtual Laboratory (CVL) &amp;</div><div dir="ltr">Activity Lead HPC</div></div><div>Ph: 03 99055942 (+61 3 99055942)</div><div dir="ltr">Mobile: 0437414123 (+61 4 3741 4123)</div><div dir="ltr">Multi-modal Australian ScienceS Imaging and Visualisation Environment<br>(<a href="http://www.massive.org.au/" rel="noreferrer" style="color:rgb(17,85,204)" target="_blank">www.massive.org.au</a>)<br>Monash University<br></div></div></div></div></div></div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, 26 Nov 2020 at 23:52, Gilad Shainer &lt;<a href="mailto:Shainer@mellanox.com">Shainer@mellanox.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
<div lang="EN-US">
<div class="gmail-m_3676818464350151208WordSection1">
<p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">Let me try to help:<u></u><u></u></span></p>
<p class="gmail-m_3676818464350151208MsoListParagraph"><u></u><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><span>-<span style="font-style:normal;font-variant-caps:normal;font-weight:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">
</span></span></span><u></u><span dir="LTR"></span><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">OpenStack is already supported natively on InfiniBand, so there is no need to move to Ethernet for that<u></u><u></u></span></p>
<p class="gmail-m_3676818464350151208MsoListParagraph"><u></u><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><span>-<span style="font-style:normal;font-variant-caps:normal;font-weight:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">
</span></span></span><u></u><span dir="LTR"></span><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">File-system-wise, you can have an IB file system and connect to it directly over IB.<u></u><u></u></span></p>
<p class="gmail-m_3676818464350151208MsoListParagraph"><u></u><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><span>-<span style="font-style:normal;font-variant-caps:normal;font-weight:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">
</span></span></span><u></u><span dir="LTR"></span><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">Depending on the distance, you can run IB up to 2 km between switches, or use Mellanox MetroX to connect over 40 km. VicinityIO has
systems that go over thousands of miles…<u></u><u></u></span></p>
<p class="gmail-m_3676818464350151208MsoListParagraph"><u></u><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><span>-<span style="font-style:normal;font-variant-caps:normal;font-weight:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">
</span></span></span><u></u><span dir="LTR"></span><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">IB's advantages are much lower latency (the switches alone have 3X lower latency), cost effectiveness (for the same speed, IB switches
are more cost-effective than Ethernet) and In-Network Computing engines (MPI reduction operations and Tag Matching run on the network)<u></u><u></u></span></p>
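To put the in-network computing point in perspective: a conventional allreduce implemented purely on the hosts needs on the order of log2(N) sequential message steps, whereas an in-network reduction (e.g. Mellanox SHARP) completes in roughly one traversal of the switch tree, regardless of rank count. A back-of-envelope sketch of that scaling difference; the latency constants below are invented placeholders for illustration, not measured or vendor figures:

```python
import math

def host_allreduce_steps(n_ranks: int) -> int:
    """Sequential network steps for a recursive-doubling allreduce on hosts."""
    return math.ceil(math.log2(n_ranks))

# Placeholder latencies, for illustration only (not real measurements):
HOST_STEP_US = 1.5    # one host-to-host message exchange
IN_NETWORK_US = 2.0   # one up-and-down traversal of the switch tree

def host_allreduce_us(n_ranks: int) -> float:
    """Rough host-based allreduce time: steps multiplied by per-step latency."""
    return host_allreduce_steps(n_ranks) * HOST_STEP_US

for n in (8, 64, 512):
    print(f"{n:4d} ranks: host-based ~{host_allreduce_us(n):.1f} us, "
          f"in-network ~{IN_NETWORK_US:.1f} us")
```

The absolute numbers are made up; the point is only that the host-based cost grows with log2(N) while the offloaded reduction stays roughly flat as the job scales.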
<p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">If you need help, feel free to contact me directly.
<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><u></u> <u></u></span></p>
<div>
<p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">Regards,
<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">Gilad Shainer<u></u><u></u></span></p>
</div>
<p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><u></u> <u></u></span></p>
<div>
<div style="border-style:solid none none;border-top-width:1pt;border-top-color:rgb(225,225,225);padding:3pt 0in 0in">
<p class="MsoNormal"><b><span style="font-size:11pt;font-family:Calibri,sans-serif">From:</span></b><span style="font-size:11pt;font-family:Calibri,sans-serif"> Beowulf [mailto:<a href="mailto:beowulf-bounces@beowulf.org" target="_blank">beowulf-bounces@beowulf.org</a>]
<b>On Behalf Of </b>John Hearns<br>
<b>Sent:</b> Thursday, November 26, 2020 3:42 AM<br>
<b>To:</b> Jörg Saßmannshausen <<a href="mailto:sassy-work@sassy.formativ.net" target="_blank">sassy-work@sassy.formativ.net</a>>; Beowulf Mailing List <<a href="mailto:beowulf@beowulf.org" target="_blank">beowulf@beowulf.org</a>><br>
<b>Subject:</b> Re: [Beowulf] RoCE vs. InfiniBand<u></u><u></u></span></p>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">Jorg, I think I might know where the Lustre storage is!<u></u>
<u></u></p>
<div>
<p class="MsoNormal">It is possible to install storage routers, so you could route between ethernet and infiniband.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">It is also worth saying that Mellanox have MetroX InfiniBand switches - though I do not think they go as far as the west of London!<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Seriously though, you ask about RoCE. I will stick my neck out and say yes: if you are planning an OpenStack cluster<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">with the intention of having mixed AI and 'traditional' HPC workloads, I would go for a RoCE-style setup.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">In fact I am in a discussion about a new project for a customer with similar aims in an hour's time.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">I could get some benchmarking time if you want to do a direct comparison of Gromacs on IB / RoCE.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">On Thu, 26 Nov 2020 at 11:14, Jörg Saßmannshausen <<a href="mailto:sassy-work@sassy.formativ.net" target="_blank">sassy-work@sassy.formativ.net</a>> wrote:<u></u><u></u></p>
</div>
<blockquote style="border-style:none none none solid;border-left-width:1pt;border-left-color:rgb(204,204,204);padding:0in 0in 0in 6pt;margin-left:4.8pt;margin-right:0in">
<p class="MsoNormal">Dear all,<br>
<br>
as the DNS problems have been solved (many thanks for doing this!), I was <br>
wondering if people on the list have some experiences with this question:<br>
<br>
We are currently in the process to purchase a new cluster and we want to use <br>
OpenStack for the whole management of the cluster. Part of the cluster will <br>
run HPC applications like GROMACS for example, other parts typical OpenStack <br>
applications like VM. We also are implementing a Data Safe Haven for the more <br>
sensitive data we are aiming to process. Of course, we want to have a decent <br>
size GPU partition as well!<br>
<br>
Now, traditionally I would say that we are going for InfiniBand. However, for <br>
reasons I don't want to go into right now, our existing file storage (Lustre) <br>
will be in a different location. Thus, we decided to go for RoCE for the file <br>
storage and InfiniBand for the HPC applications. <br>
<br>
The point I am struggling with is to understand whether this is really the best <br>
solution or whether, given that we are not building a 100k-node cluster, we could use <br>
RoCE for the few nodes which are doing parallel, read MPI, jobs too. <br>
I have a nagging feeling that I am missing something if we move to pure <br>
RoCE and ditch InfiniBand. We have a mixed workload, from ML/AI to MPI <br>
applications like GROMACS to pipelines like they are used in the bioinformatic <br>
corner. We are not planning to partition the GPUs, the current design model is <br>
to have only 2 GPUs in a chassis. <br>
So, is there something I am missing or is the stomach feeling I have really a <br>
lust for some sushi? :-)<br>
<br>
Thanks for your sentiments here, much welcome!<br>
<br>
All the best from a dull London<br>
<br>
Jörg<br>
<br>
<br>
<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" target="_blank">
https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><u></u><u></u></p>
</blockquote>
</div>
</div>
</div>
</div>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br>
</blockquote></div>