[Beowulf] experience with HPC running on OpenStack

Jörg Saßmannshausen sassy-work at sassy.formativ.net
Wed Jul 8 02:25:08 PDT 2020

Hi John,

thanks for the links. I know Martyn personally, and of course I know his 
(in)famous talks at the CIUK workshops.
The InfiniBand is not for storage (I still consider using IB for storage a 
waste); it is for parallel computing, where you need the low latency. As far 
as I can see, even the latest Ethernet technology does not match IB in terms 
of latency.
We are currently working out how many cores our users are actually using, so 
we might get away with a few multi-core machines.
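For what it is worth, if the scheduler happens to be Slurm (an assumption on my 
part; we have not said which scheduler is in use), something along these lines 
can tally the per-job core counts from `sacct` parsable output. The sample data 
below is made up for illustration; on a real system you would capture the text 
from `sacct -a -X --noheader --parsable2 -o JobID,AllocCPUS` instead. A minimal 
sketch, not a polished accounting tool:

```python
from collections import Counter

# Hypothetical sample of `sacct -a -X --noheader --parsable2 -o JobID,AllocCPUS`
# output. On a live cluster you would read this via subprocess instead.
sacct_output = """\
1001|16
1002|128
1003|16
1004|32
1005|16
"""

def core_histogram(text):
    """Count how often each allocated core count appears across jobs."""
    counts = Counter()
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines in the accounting dump
        _jobid, cpus = line.split("|")
        counts[int(cpus)] += 1
    return counts

hist = core_histogram(sacct_output)
for cores, njobs in sorted(hist.items()):
    print(f"{cores:>5} cores: {njobs} job(s)")
```

If most jobs land at or below the core count of one fat node, that would 
support the idea that a few big multi-core boxes cover the bulk of the 
workload without a low-latency interconnect.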

All the best


On Wednesday, 1 July 2020, 05:09:17 BST, John Hearns wrote:
> Jorg, I would back up what Matt Wallis says. What benefits would OpenStack
> bring you?
> Do you need to set up a flexible infrastructure where clusters can be
> created on demand for specific projects?
> Regarding InfiniBand, the relevant concept is SR-IOV. This article is worth
> reading:
> https://docs.openstack.org/neutron/pike/admin/config-sriov.html
> I would take a step back and look at your storage technology, and which one
> is the best to go forward with.
> Also look at the proceedings of the last STFC Computing Insights, where
> Martyn Guest presented a lot of benchmarking results on AMD Rome.
> Page 103 onwards in this report:
> http://purl.org/net/epubs/manifestation/46387165/DL-CONF-2020-001.pdf
> On Tue, 30 Jun 2020 at 12:21, Jörg Saßmannshausen <
> sassy-work at sassy.formativ.net> wrote:
> > Dear all,
> > 
> > we are currently planning a new cluster and this time around the idea was
> > to
> > use OpenStack for the HPC part of the cluster as well.
> > 
> > I was wondering if somebody on the list here has some first-hand
> > experience.
> > One of the things we are currently not so sure about is InfiniBand (or
> > another low-latency interconnect, but not Ethernet): can you run HPC jobs
> > on OpenStack which require more cores than a single box provides? I am
> > thinking of programs like CP2K, GROMACS, NWChem (if those sound familiar
> > to you), which utilise these kinds of networks very well.
> > 
> > I came across things like Magic Castle from Compute Canada, but as far as
> > I understand it, they are not using it for production (yet).
> > 
> > Is anybody on here familiar with this?
> > 
> > All the best from London
> > 
> > Jörg
> > 
> > 
> > 
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> > To change your subscription (digest mode or unsubscribe) visit
> > https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
