<div dir="ltr">>in theory you could cap the performance interference using VM's and<br>
>cgroup controls, but i'm not sure how effective that actually is (no<br><div>
>data) in HPC.</div><div><br></div><div>I looked quite closely at performance capping for RDMA applications in cgroups about a year ago.</div><div>It is very doable; however, you need a recent 4.x-series kernel, and sadly we were using 3.x-series kernels on RHEL.</div><div>Parav Pandit is the go-to person for this: <a href="https://www.openfabrics.org/images/eventpresos/2016presentations/115rdmacont.pdf">https://www.openfabrics.org/images/eventpresos/2016presentations/115rdmacont.pdf</a><br></div><br></div><br><div class="gmail_quote"><div dir="ltr">On Thu, 26 Jul 2018 at 15:27, John Hearns <<a href="mailto:hearnsj@googlemail.com">hearnsj@googlemail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>For VM, substitute 'container', since containerisation is intimately linked with cgroups anyway.</div><div>Google 'CEPH Docker' and there is plenty of information.<br></div><div><br></div><div>Someone I work with tried out CEPH on Docker the other day and got into some knots regarding access to the actual hardware devices.<br></div><div>He then downloaded Minio and got it working very rapidly. Sorry - I am only repeating this story second hand.</div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, 26 Jul 2018 at 15:20, Michael Di Domenico <<a href="mailto:mdidomenico4@gmail.com" target="_blank">mdidomenico4@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Thu, Jul 26, 2018 at 3:14 AM, Jörg Saßmannshausen<br>
<<a href="mailto:sassy-work@sassy.formativ.net" target="_blank">sassy-work@sassy.formativ.net</a>> wrote:<br>
> I once had this idea as well: using the spinning discs which I have in the<br>
> compute nodes as part of a distributed scratch space. I was using glusterfs<br>
> for that as I thought it might be a good idea. It was not.<br>
<br>
i split the thread as to not pollute the other discussion.<br>
<br>
i'm curious whether anyone has hard data on the above, but with the<br>
compute encapsulated away from the storage using VMs instead of just<br>
separate processes?<br>
<br>
in theory you could cap the performance interference using VM's and<br>
cgroup controls, but i'm not sure how effective that actually is (no<br>
data) in HPC.<br>
<br>
I've been thinking about this recently to rebalance some of the rack<br>
loading throughout my data center. yes, i can move things around<br>
within the racks, but then it turns into a cabling nightmare.<br>
<br>
discuss?<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</blockquote></div>
</blockquote></div>
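For anyone curious what the rdma cgroup controller mentioned above looks like in practice, here is a minimal sketch. The device name (mlx5_0), the limits, and the cgroup path are illustrative assumptions, not taken from Parav's slides; on a real system CGROOT would be /sys/fs/cgroup and the writes require root, so it defaults to a scratch directory here purely to make the commands runnable.

```shell
# Sketch: capping RDMA resources with the cgroup v2 'rdma' controller
# (kernel >= 4.11). CGROOT defaults to a scratch dir so the commands can
# be exercised without privileges; point it at /sys/fs/cgroup for real use.
CGROOT="${CGROOT:-$(mktemp -d)}"
CG="$CGROOT/hpcjob"
mkdir -p "$CG"

# Limit the job to 16 HCA handles and 1024 HCA objects on device mlx5_0.
# rdma.max takes one line per RDMA device; the names here are illustrative.
echo "mlx5_0 hca_handle=16 hca_object=1024" > "$CG/rdma.max"

# On a real hierarchy you would then attach the job's processes, which
# inherit the limits (commented out: needs a real cgroup mount and root):
#   echo $$ > "$CG/cgroup.procs"

cat "$CG/rdma.max"
```

The same cgroup v2 tree can also carry cpu.max and io.max files to throttle CPU and block I/O, which is closer to the compute-versus-storage interference question in the original post.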