[Beowulf] HPC workflows

John Hearns hearnsj at googlemail.com
Fri Dec 7 07:19:30 PST 2018


Good points regarding packages shipped with distributions.
One of my pet peeves (only one? - Ed.) is being on mailing lists for HPC
software such as OpenMPI and Slurm and seeing many requests along the lines
of
"I installed PackageX on my cluster", and then finding from the replies that
the version is a very out-of-date one delivered by the distribution's
repositories.
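Before posting to one of those lists, a quick sanity check is to compare
the version the distro shipped against the current upstream release. A
minimal sketch - the version strings here are purely illustrative;
substitute what your package manager actually reports and the release
number advertised on the project's web site:

```shell
# Illustrative version strings - substitute the output of your package
# manager (e.g. rpm -q --qf '%{VERSION}\n' openmpi) and the release
# currently listed on the project's site.
installed="1.6.4"
upstream="3.1.3"

# sort -V (GNU coreutils) does a proper version-aware comparison.
oldest=$(printf '%s\n%s\n' "$installed" "$upstream" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$upstream" ]; then
    echo "distro package lags upstream"
fi
```

If the distro package lags far behind, report your problem with the
version number included - or build the current release first.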

The other day I was interacting with someone on the Julia discussion list
who was using a CentOS 6.5 cluster. His cluster uses the original SGE
version.
I created a test CentOS 6.5 cluster using Vagrant and Ansible, and found to
my horror that Gridengine RPMs are available out of the box with CentOS 6.5.
Now let me make something clear - a good few years ago I installed SGE on
customer clusters, and became something of an expert in SGE and MPI
integration.
But in 2018? Would I advise installing the original Sun version of SGE?
No.  (I am not referring to Univa etc., which is excellent.)
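For anyone wanting to reproduce that kind of throwaway test cluster, a
minimal Vagrantfile along these lines will do it. This is a sketch only -
the box name, node count and playbook path are my assumptions here, not
necessarily what I used:

```ruby
# Minimal two-node CentOS 6.x test cluster, provisioned by Ansible.
# Box name, hostnames and playbook path are illustrative.
Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-6.7"      # any public CentOS 6.x box
  (1..2).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.hostname = "node#{i}"
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "cluster.yml"  # your SGE/Slurm playbook
      end
    end
  end
end
```

Then `vagrant up`, log in, and see what the stock repos offer you.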

There is definitely a place for packaging and delivery of up-to-date
software stacks for HPC.
If I might mention Bright Computing - that is what they do. They compile up
(for instance) Slurm and put it on their own repos.
So you can have a tested set of packages without continually rolling your
own.

I hate to say it, but I think the current generation of web developers, who
will incorporate some Javascript from an online repo to do a bit shift
(I am referring to the famous package which the developer took down and
which affected thousands of web sites),
are only too ready to install software from the Ubuntu repos without
thinking. That might work for web services stacks - but for HPC?

Perhaps for another thread:
Actually I went to the AWS User Group in the UK on Wednesday. Very
impressive, and there are the new Lustre filesystems and MPI networking.
I guess the HPC world will see the same philosophy of building your setup
using the AWS toolkit as Uber etc. do today.
Also a lot of noise is being made at the moment about the convergence of
HPC and Machine Learning workloads.
Are we going to see the Machine Learning folks adapting their workflows to
run on HPC on-premise bare metal clusters?
Or are we going to see them go off and use AWS (Azure, Google?)

On Fri, 7 Dec 2018 at 16:04, Gerald Henriksen <ghenriks at gmail.com> wrote:

> On Wed, 5 Dec 2018 09:35:07 -0800, you wrote:
>
> >Certainly the inability of distros to find the person-hours to package
> >everything plays a role as well, your cause and effect chain there is
> >pretty accurate. Where I begin to branch is at the idea of software that
> is
> >unable to be packaged in an rpm/deb.
>
> In some convenient timing, the following was posted by overtmind on
> Reddit discussing why Atom hasn't been packaged for Fedora(*):
>
> ---
> "This means, for every nodejs dependency Electron needs - and there
> are a metric #$%# ton - since you can't use npm as an
> installer/package manager - you need to also package all of those and
> make sure they're in fedora and up-to-date, and then you also need to
> package all of the non-nodejs dependencies that come along with
> Electron apps, such as electron itself, and THEN you need to extract
> and remove all of the vendor'd libraries and binaries that essentially
> make Electron work, and THEN you need to make sure that there's no
> side-car'd non-free or questionable software that is forbidden in
> fedora also, like ffmpeg. G'head and look at the Chromium SPEC, it's a
> living nightmare (Spot godbless your heart)"
> ---
>
> Now obviously you could do what for example Java does with a jar file,
> and simply throw everything into a single rpm/deb and ignore the
> packaging guidelines, but then you are back to in essence creating a
> container and just hiding it behind a massive rpm/deb.
>
> >The thing we can never measure and thus can only speculate about forever
> >is:  if all the person-hours poured into containers (and pypi/pip and cran
> >and cpan and maven and scons and ...) had been poured into rpm/deb
> >packaging would we just be simply apt/yum/dnf installing what we needed
> >today? (I'm ignoring other OS/packaging tools, but you get the idea.)
>
> I (theoretically) could write a new library in
> Python/Perl/Javascript/Go/etc. and with minimal effort place that
> library in the repository for that language. My library is now
> available to everyone using that language regardless of what OS they
> are using.
>
> Alternately, I could spend many, many hours perhaps even days learning
> multiple different packaging systems, joining multiple different
> mailing lists / bugzillas / build systems, so that I can make my
> library easily available to people on Windows, macOS, Fedora, RHEL,
> openSUSE, Debian, Ubuntu, ...  - or alternately hope that someone will
> not only take the time to package my library for all those different
> platforms, but also commit the future time to keep it up to date.
>
> Option 2 worked 20 years ago when we only cared about 2 or 3
> distributions of Linux and had a lot less open source / free software.
> But, unfortunately, it does not scale and so for that reason (and a
> few others) the effort to create Docker / npm, maven, etc. is the
> lesser of the options.
>
> * - https://www.reddit.com/r/Fedora/comments/a3q1a2/atom_editoride/
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>