<div dir="ltr">Hi John,<div>For singularity containers there isn't any need to integrate with the scheduler as the containers run as normal user programs. They are different to docker containers as they don't have/need root to run. The cluster itself does need to have singularity installed as it runs a setuid binary to run the container. They are a super convenient way of getting around all the software dependency issues on our Centos cluster.</div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">Cheers,<br><br>Lance<br>--<br>Dr Lance Wilson<br>Senior HPC Consultant</div><div>Ph: 03 99055942 (+61 3 99055942</div><div dir="ltr">Mobile: 0437414123 (+61 4 3741 4123)</div><div dir="ltr">Multi-modal Australian ScienceS Imaging and Visualisation Environment<br>(<a href="http://www.massive.org.au/" rel="noreferrer" style="color:rgb(17,85,204)" target="_blank">www.massive.org.au</a>)<br>Monash University<br></div></div></div></div></div>
<br><div class="gmail_quote">On 17 June 2017 at 00:14, John Hearns <span dir="ltr"><<a href="mailto:hearnsj@googlemail.com" target="_blank">hearnsj@googlemail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Thanks Josh. Am I familiar with modifying Python code and PBS hook scripts?</div><div>Yes - I have had my head under the hood of PBS hooks for a long time.</div><div>Hence the pronounced stutter and my predelection to randomly scream out loud in public places.</div><div><br></div><div><br></div><div><br></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On 16 June 2017 at 15:48, Josh Catana <span dir="ltr"><<a href="mailto:jcatana@gmail.com" target="_blank">jcatana@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto">I know they have a canned scheduler hook to run docker. If you're familiar with python modifying their code to run singularity shouldn't be difficult. I rewrote their hook to operate in my environment pretty easily.</div><div class="m_9208386229525666443HOEnZb"><div class="m_9208386229525666443h5"><div class="gmail_extra"><br><div class="gmail_quote">On Jun 16, 2017 4:29 AM, "John Hearns" <<a href="mailto:hearnsj@googlemail.com" target="_blank">hearnsj@googlemail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><div dir="ltr"><div>Lance, thankyou very much for the reply. I will look at Docker for those 'system' type tasks also.</div><div><br></div><div>Regarding Singularity does anyone know much about Singularity integration with PBSPro?</div><div>I guess I could actually ask Altair....</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 16 June 2017 at 01:30, Lance Wilson <span dir="ltr"><<a href="mailto:lance.wilson@monash.edu" target="_blank">lance.wilson@monash.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><div dir="ltr">Hi John,<div>In regards to your Singularity question we are using cgroups for the containers. Mostly the containers are used in Slurm jobs which creates the appropriate cgroups. We are also using the gpu driver passthrough functionality of Singularity now for our machine learning and cryoem processing containers which have the cgroups applied to gpus.</div><div><br></div><div>Back to your systems containers questions many of our systems have been put into docker containers as they run on same/similar operating system and still need root to function correctly. 
Pretty much every new system thing we do is scripted and put into a container so that we can recover quickly in an outage scenario and move around things as part of our larger cloud (private and public) strategy.</div></div><div class="gmail_extra"><br clear="all"><div><div class="m_9208386229525666443m_1714226312258785668m_3605233562507162297m_4417907763180398435gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">Cheers,<br><br>Lance<br>--<br>Dr Lance Wilson<br>Senior HPC Consultant</div><div>Ph: 03 99055942 <a href="tel:+61%203%209905%205942" value="+61399055942" target="_blank">(+61 3 99055942</a></div><div dir="ltr">Mobile: 0437414123 (+61 4 3741 4123)</div><div dir="ltr">Multi-modal Australian ScienceS Imaging and Visualisation Environment<br>(<a style="color:rgb(17,85,204)" href="http://www.massive.org.au/" rel="noreferrer" target="_blank">www.massive.org.au</a>)<br>Monash University<br></div></div></div></div></div>
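To illustrate the cgroups point: a containerised process inherits the Slurm-created cgroups because it is just an ordinary process, and Singularity's --nv option binds the host NVIDIA driver stack into the container. A rough sketch meant to run inside a Slurm job step (the image path is hypothetical):

```python
#!/usr/bin/env python
# Rough sketch for a Slurm job step. Both prints should show the same
# slurm-created cgroup paths, since the container is an ordinary process
# in the job's cgroup; "--nv" passes the host NVIDIA driver through.
import subprocess

IMAGE = "/apps/containers/tensorflow.img"  # hypothetical GPU image

# Cgroups of the job step itself.
print(open("/proc/self/cgroup").read())

# Cgroups as seen from inside the container: the same slurm-created groups.
subprocess.check_call(
    ["singularity", "exec", "--nv", IMAGE, "cat", "/proc/self/cgroup"]
)
```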
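And for anyone wondering what the hook rewrite Josh describes might look like, here is a rough, untested sketch of an execjob_launch-style PBS Pro hook that swaps the job's program for a `singularity exec` wrapper. The `container_image` custom resource is an invented convention here, and the exact event attributes may differ between PBS Pro versions; treat it as a shape, not Altair's actual hook.

```python
# Untested sketch of a PBS Pro execjob_launch hook that wraps the job's
# program in "singularity exec", in the spirit of the canned Docker hook.
# "container_image" is a hypothetical custom resource; the precise event
# attributes (progname/argv) may vary by PBS Pro version.
import pbs

e = pbs.event()
image = e.job.Resource_List["container_image"]

if image:
    # Preserve the original program and its arguments...
    original = [e.progname] + [str(arg) for arg in e.argv]
    # ...then re-point the launch at singularity.
    e.progname = "/usr/bin/singularity"
    e.argv = ["singularity", "exec", str(image)] + original

e.accept()
```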
<br><div class="gmail_quote"><div><div class="m_9208386229525666443m_1714226312258785668m_3605233562507162297h5">On 15 June 2017 at 20:06, John Hearns <span dir="ltr"><<a href="mailto:hearnsj@googlemail.com" target="_blank">hearnsj@googlemail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><div><div class="m_9208386229525666443m_1714226312258785668m_3605233562507162297h5"><div dir="ltr"><div>I'm not sure this post is going to make a lot of sense. But please bear with me!</div><div>For applications containers are possible using Singularity or Docker of course.</div><div><br></div><div>In HPC clusters we tend to have several 'service node' activities, such as the cluster management/ head node, perhaps separate provisioning nodes to spread the load, batch queue system masters, monitoring setups, job submission and dedicated storage nodes.</div><div><br></div><div>These can all of course be run on a single cluster head node in a small setup (with the exception of the storage nodes). In a larger setup you can run these services in virtual machines.</div><div><br></div><div>What I am asking is anyone using technologies such as LXD containers to run these services?</div><div>I was inspired by an Openstack talk by James Page at Canonical, where all the Opestack services were deployed by Juju charms onto LXD containers.</div><div>So we pack all the services into containers on physical server(s) which makes moving them or re-deploying things very flexible.</div><div><a href="https://www.youtube.com/watch?v=5orzBITR3X8" target="_blank">https://www.youtube.com/watch?<wbr>v=5orzBITR3X8</a></div><div><br></div><div>While I'm talking abotu containers, is anyone deploying singularity containers in cgroups, and limiting the resources they can use (I'm specifically thinking of RDMA here).</div><div><br></div><div><br></div><div><br></div><div>ps. I have a terrible sense of deja vu here... I think I asked the Singularity question a month ago.</div><div>I plead insanity m'lord</div><div><br></div><div><br></div></div>
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf