[Beowulf] RoCE vs. InfiniBand
Lance Wilson
lance.wilson at monash.edu
Fri Nov 27 00:07:25 UTC 2020
We are a RoCE shop running HPC on private cloud and have had a mostly good
experience. We have had quite a number of issues over time that turned out
to be bugs and needed vendor support to resolve, some of which took a long
time. So from a maturity perspective IB is definitely much better. As a
matter of priority, ensure that all the kit is RoCE v2; v1 kit has a number
of unresolved issues. A quick way to check for stragglers is sketched below.
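
A minimal sketch of that check, assuming the standard Linux RDMA sysfs
layout (device names and paths are illustrative; Mellanox OFED also ships
a show_gids script that reports much the same thing):

    #!/usr/bin/env python3
    # list_gid_types.py - print the GID type for every RDMA port so any
    # lingering "IB/RoCE v1" entries stand out; RoCE v2 ports should
    # expose "RoCE v2" GIDs.
    import glob
    import os

    for path in sorted(glob.glob(
            "/sys/class/infiniband/*/ports/*/gid_attrs/types/*")):
        try:
            with open(path) as f:
                gid_type = f.read().strip()
        except OSError:
            continue  # unpopulated GID table entries fail to read
        parts = path.split(os.sep)
        dev, port, idx = parts[4], parts[6], parts[9]
        print(f"{dev} port {port} gid {idx}: {gid_type}")
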
I do like that it is Ethernet though, especially with all of the cloud kit.
If your workloads are heavy on the MPI side you might be better off with
IB, but my communities run jobs under 500 cores.
Very happy to answer questions on our experience.
Cheers,
Lance
--
Dr Lance Wilson
Technical Lead ACCS Characterisation Virtual Laboratory (CVL) &
Activity Lead HPC
Ph: 03 99055942 (+61 3 99055942)
Mobile: 0437414123 (+61 4 3741 4123)
Multi-modal Australian ScienceS Imaging and Visualisation Environment
(www.massive.org.au)
Monash University
On Thu, 26 Nov 2020 at 23:52, Gilad Shainer <Shainer at mellanox.com> wrote:
> Let me try to help:
>
> - OpenStack is supported natively on InfiniBand already, so there is
> no need to go to Ethernet for that
>
> - On the file system side, you can have an IB file system and connect
> to it directly over IB.
>
> - Depending on the distance, you can run IB over 2 km between
> switches, or use Mellanox MetroX for connecting over 40 km. VicinityIO
> have systems that go over thousands of miles…
>
> - IB's advantages are much lower latency (the switches alone have 3X
> lower latency), cost effectiveness (for the same speed, IB switches are
> more cost effective than Ethernet ones) and the In-Network Computing
> engines (MPI reduction operations and Tag Matching run inside the
> network); a toy illustration of the reduction case follows.
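>
> As a rough illustration of that last point (a minimal sketch, not SHARP
> itself, just the collective it offloads), timing MPI_Allreduce on each
> fabric shows where in-network reduction pays off; mpi4py and NumPy are
> assumed, and the payload size and rank count are arbitrary:
>
>     # allreduce_bench.py - crude MPI_Allreduce timing,
>     # run with e.g.: mpirun -np 64 python allreduce_bench.py
>     from mpi4py import MPI
>     import numpy as np
>
>     comm = MPI.COMM_WORLD
>     buf = np.ones(1024, dtype=np.float64)   # 8 KiB per rank
>     out = np.empty_like(buf)
>     iters = 1000
>
>     comm.Barrier()
>     t0 = MPI.Wtime()
>     for _ in range(iters):
>         comm.Allreduce(buf, out, op=MPI.SUM)
>     t1 = MPI.Wtime()
>
>     if comm.Get_rank() == 0:
>         print(f"mean Allreduce: {(t1 - t0) / iters * 1e6:.1f} us")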
>
>
>
> If you need help, feel free to contact me directly.
>
>
>
> Regards,
>
> Gilad Shainer
>
>
>
> From: Beowulf [mailto:beowulf-bounces at beowulf.org] On Behalf Of John
> Hearns
> Sent: Thursday, November 26, 2020 3:42 AM
> To: Jörg Saßmannshausen <sassy-work at sassy.formativ.net>; Beowulf
> Mailing List <beowulf at beowulf.org>
> Subject: Re: [Beowulf] RoCE vs. InfiniBand
>
> Jörg, I think I might know where the Lustre storage is!
>
> It is possible to install storage routers, so you could route between
> Ethernet and InfiniBand.
>
> It is also worth saying that Mellanox have Metro InfiniBand switches -
> though I do not think they go as far as the west of London!
>
>
>
> Seriously though, you ask about RoCE. I will stick my neck out and say
> yes: if you are planning an OpenStack cluster with the intention of
> having mixed AI and 'traditional' HPC workloads, I would go for a
> RoCE-style setup.
>
> In fact I am in a discussion about a new project for a customer with
> similar aims in an hour's time.
>
>
>
> I could get some benchmarking time if you want to do a direct
> comparison of GROMACS on IB / RoCE.
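>
> Before the application numbers, a quick fabric-level sanity check is
> useful. A minimal point-to-point latency sketch, assuming mpi4py and
> NumPy with one rank per node (the script name and launch line are
> illustrative):
>
>     # pingpong.py - small-message latency between ranks 0 and 1,
>     # run with e.g.: mpirun -np 2 --map-by node python pingpong.py
>     from mpi4py import MPI
>     import numpy as np
>
>     comm = MPI.COMM_WORLD
>     rank = comm.Get_rank()
>     buf = np.zeros(8, dtype=np.uint8)   # 8-byte message
>     iters = 10000
>
>     comm.Barrier()
>     t0 = MPI.Wtime()
>     for _ in range(iters):
>         if rank == 0:
>             comm.Send(buf, dest=1)
>             comm.Recv(buf, source=1)
>         elif rank == 1:
>             comm.Recv(buf, source=0)
>             comm.Send(buf, dest=0)
>     t1 = MPI.Wtime()
>
>     if rank == 0:
>         print(f"one-way latency: {(t1 - t0) / iters / 2 * 1e6:.2f} us")
>
> Running the same script over IB and over RoCE gives a like-for-like
> latency comparison before GROMACS enters the picture.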
>
> On Thu, 26 Nov 2020 at 11:14, Jörg Saßmannshausen <
> sassy-work at sassy.formativ.net> wrote:
>
> Dear all,
>
> as the DNS problems have been solved (many thanks for doing this!), I
> was wondering if people on the list have some experience with this
> question:
>
> We are currently in the process of purchasing a new cluster, and we
> want to use OpenStack for the whole management of the cluster. Part of
> the cluster will run HPC applications like GROMACS, other parts typical
> OpenStack applications like VMs. We are also implementing a Data Safe
> Haven for the more sensitive data we are aiming to process. Of course,
> we want a decent-sized GPU partition as well!
>
> Now, traditionally I would say that we are going for InfiniBand.
> However, for reasons I don't want to go into right now, our existing
> file storage (Lustre) will be in a different location. Thus, we decided
> to go for RoCE for the file storage and InfiniBand for the HPC
> applications.
>
> The point I am struggling with is to understand whether this is really
> the best solution or whether, given that we are not building a
> 100k-node cluster, we could use RoCE for the few nodes which are doing
> parallel (read: MPI) jobs too. I have a nagging feeling that I am
> missing something if we move to pure RoCE and ditch the InfiniBand. We
> have a mixed workload, from ML/AI to MPI applications like GROMACS to
> pipelines like those used in the bioinformatics corner. We are not
> planning to partition the GPUs; the current design model is to have
> only 2 GPUs in a chassis. So, is there something I am missing, or is
> the stomach feeling I have really just a lust for some sushi? :-)
>
> Thanks for your sentiments here, they are much welcome!
>
> All the best from a dull London
>
> Jörg
>
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
>