<div dir="ltr">Hi Brian,<div>For single-node jobs, MPI can be run with the MPI binary from the container at native performance for shared-memory messages. This has worked without issue since the very early days of Singularity. The only tricky part has been multi-node and multi-container runs.</div><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr">Cheers,<br><br>Lance<br>--<br>Dr Lance Wilson<br>Characterisation Virtual Laboratory (CVL) Coordinator &</div><div dir="ltr">Senior HPC Consultant</div><div>Ph: 03 99055942 (+61 3 99055942)</div><div dir="ltr">Mobile: 0437414123 (+61 4 3741 4123)</div><div dir="ltr">Multi-modal Australian ScienceS Imaging and Visualisation Environment<br>(<a href="http://www.massive.org.au/" rel="noreferrer" style="color:rgb(17,85,204)" target="_blank">www.massive.org.au</a>)<br>Monash University<br></div></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 22 May 2019 at 23:49, Brian Dobbins <<a href="mailto:bdobbins@gmail.com">bdobbins@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div>Thanks, Gerald - I'll be reading this shortly. And to add to any discussion, here's the Blue Waters container paper that I like to point people towards - from the same conference, in fact:</div><div><a href="https://arxiv.org/pdf/1808.00556.pdf" target="_blank">https://arxiv.org/pdf/1808.00556.pdf</a><br></div><div><br></div><div>The key thing here is achieving <i>native</i> network performance through the MPICH ABI compatibility layer[1]. This is a crucial enabling technology.
Prior to this, I was slightly negative about containers, figuring MPI compatibility/performance was an issue - now, I'm eager to containerize some of our applications, as it can dramatically simplify installation/configuration for non-expert users.</div><div><br></div><div>One thing I'm less certain about, and would welcome any information on, is whether things like Linux's cross-memory attach (CMA) or XPMEM can work across containers for MPI messages on the same node. Since it's the same host kernel, I'm somewhat inclined to think so, but I haven't yet had the time to run any tests. Anyway, given the complexity of a lot of projects these days, native performance in a containerized environment is pretty much the best of both worlds.</div><div><br></div><div>[1] MPICH ABI Compatibility Initiative: <a href="https://www.mpich.org/abi/" target="_blank">https://www.mpich.org/abi/</a></div><div><br></div><div>Cheers,</div><div> - Brian</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, May 22, 2019 at 7:10 AM Gerald Henriksen <<a href="mailto:ghenriks@gmail.com" target="_blank">ghenriks@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">Paper on arXiv that may be of interest to some, as it suggests where HPC<br>
is heading even for private clusters:<br>
<br>
Evaluation of Docker Containers for Scientific Workloads in the Cloud<br>
<a href="https://arxiv.org/abs/1905.08415" rel="noreferrer" target="_blank">https://arxiv.org/abs/1905.08415</a><br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br>
</blockquote></div>
</blockquote></div>