[Beowulf] Containers in HPC

Jonathan Aquilina jaquilina at eagleeyet.net
Thu May 23 03:49:20 PDT 2019


Hi Guys,

Can someone clarify something for me: are containers another form of virtualized system, or are they isolated environments running on bare metal?

Regards,
Jonathan

From: Beowulf <beowulf-bounces at beowulf.org> on behalf of Lance Wilson via Beowulf <beowulf at beowulf.org>
Reply to: Lance Wilson <lance.wilson at monash.edu>
Date: Thursday, 23 May 2019 at 01:27
To: Brian Dobbins <bdobbins at gmail.com>
Cc: "beowulf at beowulf.org" <beowulf at beowulf.org>
Subject: Re: [Beowulf] Containers in HPC

Hi Brian,
For single-node jobs, MPI can be run with the MPI binary from inside the container, with native performance for shared-memory messages. This has worked without issue since the very early days of Singularity; the only tricky part has been multi-node and multi-container.
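
For concreteness, a single-node launch of that sort looks something like the below (the image and binary names are placeholders):

    # Single node: everything, including mpirun, comes from the image.
    # "mpi.sif" and "./myapp" are hypothetical names.
    $ singularity exec mpi.sif mpirun -np 16 ./myapp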

Cheers,

Lance
--
Dr Lance Wilson
Characterisation Virtual Laboratory (CVL) Coordinator &
Senior HPC Consultant
Ph: 03 99055942 (+61 3 99055942)
Mobile: 0437414123 (+61 4 3741 4123)
Multi-modal Australian ScienceS Imaging and Visualisation Environment
(www.massive.org.au)
Monash University


On Wed, 22 May 2019 at 23:49, Brian Dobbins <bdobbins at gmail.com> wrote:

Thanks, Gerald - I'll be reading this shortly.  And to add to any discussion, here's the Blue Waters container paper that I like to point people towards - from the same conference, in fact:
https://arxiv.org/pdf/1808.00556.pdf

The key thing here is achieving native network performance through the MPICH ABI compatibility layer[1]. Prior to that, I was slightly negative on containers, figuring MPI compatibility and performance would be an issue; now I'm eager to containerize some of our applications, since it can dramatically simplify installation and configuration for non-expert users.
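
For anyone who hasn't tried it, the pattern is roughly: build against stock MPICH inside the image, then point the loader at the host's ABI-compatible MPI at run time. A rough sketch (the host library path and image name are placeholders for whatever a given site uses):

    # Inside the image: build against stock MPICH.
    $ mpicc -o myapp myapp.c
    # On the host: expose the site's MPICH-ABI-compatible library to the
    # container and let the loader pick it up ("/opt/hostmpi" is hypothetical).
    $ export SINGULARITYENV_LD_LIBRARY_PATH=/opt/hostmpi/lib
    $ mpirun -np 256 singularity exec --bind /opt/hostmpi mpi.sif ./myapp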

One thing I'm less certain about, and would welcome any information on, is whether kernel-assisted single-copy mechanisms like Linux's cross-memory attach (CMA) or XPMEM work across containers for MPI messages on the same node. Since all the containers share the host kernel, I'm inclined to think so, but I haven't yet had time to run any tests. Anyway, given the complexity of a lot of projects these days, native performance in a containerized environment is pretty much the best of both worlds.
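
For what it's worth, checking that the host kernel has CMA compiled in is easy; the open question is whether the container runtime's namespace and ptrace settings let process_vm_readv() cross container boundaries, which is the part I'd actually test:

    # CMA is a host-kernel feature, so this check applies inside
    # containers too (=y is the typical value on distro kernels).
    $ grep CONFIG_CROSS_MEMORY_ATTACH /boot/config-$(uname -r)
    CONFIG_CROSS_MEMORY_ATTACH=y
    # Cross-container process_vm_readv() should additionally require the
    # ranks to pass the kernel's ptrace access check (e.g. a shared PID
    # namespace or a relaxed seccomp/ptrace policy) -- untested assumption.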

[1] MPICH ABI Compatibility Initiative : https://www.mpich.org/abi/

Cheers,
  - Brian


On Wed, May 22, 2019 at 7:10 AM Gerald Henriksen <ghenriks at gmail.com> wrote:
Paper on arXiv that may be of interest to some, as this may be where HPC
is heading even for private clusters:

Evaluation of Docker Containers for Scientific Workloads in the Cloud
https://arxiv.org/abs/1905.08415
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf

