[Beowulf] Containers in HPC
Brian Dobbins
bdobbins at gmail.com
Thu May 23 10:10:33 PDT 2019
Hi Lance,
> For single-node jobs, MPI can be run with the MPI binary from the container,
> with native performance for shared-memory messaging. This has worked
> without issue since the very early days of Singularity. The only tricky
> part has been multi-node and multi-container.
>
Thanks for the reply - I guess I'm curious where the 'tricky' bits are at
this point. For cross-node, container-per-rank jobs, I think the ABI
compatibility work ensures (even if not done automagically) that you get
'native' performance, but the same-node, container-per-rank case is where
I'm still unsure what happens. In theory, since each rank is still just a
process on the host, it *should* be doable, but I don't know whether some
glue needs to happen for the ranks to share memory across the container
boundaries, or whether that already happens.
If nobody knows offhand, it's on my to-do list to test this; I just
haven't found the time yet. I'll do so and update the list once I'm able.
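For what it's worth, the test I have in mind is just the usual hybrid
launch, with the host mpirun starting one container instance per rank on
a single node - a minimal sketch, where the image name and the OSU
latency binary inside it are only placeholders:

    # two ranks, same node, one Singularity instance per rank
    # (mpi_app.sif and ./osu_latency are hypothetical names)
    mpirun -np 2 singularity exec mpi_app.sif ./osu_latency

If the latency and bandwidth numbers come out close to a bare-metal run,
that would suggest the shared-memory path works across the container
boundaries; if not, there's presumably some glue still missing.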
Cheers,
- Brian